How does the Rackspace CDN work, and what are the requirements?

The Rackspace CDN isn’t too complicated to set up. If you wish to configure the CDN with a dedicated server as the origin, there is really only one main requirement.

For example, if the CDN domain Rackspace gives you for use with the CDN product is cdn.customer.com.cdn306.raxcdn.com, you will need to update the website (origin) configuration for Apache on this server.

i.e. you need to include the CDN domain cdn.customer.com.cdn306.raxcdn.com as a ServerAlias in Apache, or in the server_name directive for Nginx. That server alias/virtualhost then receives and handles requests from the CDN.

Basically, a CDN request comes in to the Rackspace raxcdn.com domain, and if there is no cached version on a local edge node, the CDN makes a call to the origin server IP you configured, sending the CDN hostname in the HTTP Host header. This is exactly why you need to set up a ServerAlias for the raxcdn.com hostname once it has been created.

This works provided that the files can be accessed on the IP address of your origin dedicated server, i.e.

http://4.2.2.4/mycdn/images/profile.jpg

When a request comes in to http://cdn.customer.com.cdn306.raxcdn.com, the CDN will, if the file is not cached, try the origin server.
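
A quick way to check this before going live is to request the file from the origin IP while forcing the CDN hostname in the Host header, using the example IP and CDN domain above:

curl -I http://4.2.2.4/mycdn/images/profile.jpg -H 'Host: cdn.customer.com.cdn306.raxcdn.com'

If the ServerAlias/virtualhost is set up correctly, you should get a 200 back for the file, just as the CDN will.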

If you wish, you can add an additional CNAME of your own for your own domain.

Example only:
i.e. cdn.mydomain.com CNAME cdn.customer.com.cdn306.raxcdn.com

This allows clients to make requests to cdn.mydomain.com/mycdn/images/profile.jpg, which resolve to the Rackspace CDN; if the CDN has no cached copy, it forwards the request to the origin server and caches the file across the edge nodes. The edge nodes are transparent, and the only thing you really need to worry about is configuring the virtualhost correctly.

The only reason you require the virtualhost is that the CDN sends a Host header, which allows your server to identify the virtualhost serving the CDN. This naturally lets you serve the CDN on the same IP address as the main site. If you intend to use SSL as well, you may wish to consider a dedicated IP, and you may find this community article of use:

https://support.rackspace.com/how-to/create-a-rackspace-cdn-service/
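
For illustration only, the origin virtualhost ends up looking roughly like this minimal sketch (customer.com, the document root and the CNAME are placeholders based on the examples above; adjust to your own site):

<VirtualHost *:80>
    ServerName customer.com
    # the raxcdn.com hostname Rackspace gave you, plus your own optional CNAME
    ServerAlias cdn.customer.com.cdn306.raxcdn.com cdn.mydomain.com
    DocumentRoot /var/www/customer.com
</VirtualHost>

Or the Nginx equivalent:

server {
    listen 80;
    server_name customer.com cdn.customer.com.cdn306.raxcdn.com cdn.mydomain.com;
    root /var/www/customer.com;
}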

Rackspace can help you configure the virtualhost, and may even be able to help you configure the CDN product with your dedicated server origin as well.

For pricing, please see the CDN calculator and FAQ at:

www.rackspace.com/pricing
https://support.rackspace.com/how-to/rackspace-cdn-faq/

Cheers &
Best wishes,
Adam

Troubleshooting Akamai CDN using Pragma Headers

Rackspace Cloud Files CDN-enabled containers and the Rackspace CDN product can occasionally have an issue; in such cases, you can troubleshoot a lot more easily by using the Akamai debug headers. To do this, add -D and the following -H Pragma headers to your curl request:

curl -I http://rackspacecdnurlgoeshereprivatecensored.r17.cf3.rackcdn.com/common/test/generator.png -D - -H "Pragma: akamai-x-get-client-ip, akamai-x-cache-on, akamai-x-cache-remote-on, akamai-x-check-cacheable, akamai-x-get-cache-key, akamai-x-get-extracted-values, akamai-x-get-nonces, akamai-x-get-ssl-client-session-id, akamai-x-get-true-cache-key, akamai-x-serial-no, akamai-x-feo-trace, akamai-x-get-request-id" -L

Naturally, for this to work you need a test URL. You can get one from Cloud Files in the Rackspace Control Panel, or from whatever origin is configured with the CDN. Just log in to the origin and find a file path like httpdocs/somefolder/somefile.png, then get the rackcdn.com URL from the Rackspace CDN product page, or from the Rackspace Cloud Files ‘show all links’ option on the cog icon next to the Cloud Files container, and append the path relative to your documentroot.

So, for the CDN URL rackspacecdnurlgoeshereprivatecensored.r17.cf3.rackcdn.com:

For files on the origin, in the documentroot of the website configured with the CDN, just add the ‘local’ document path of the file within httpdocs or your www folder.

i.e. rackspacecdnurlgoeshereprivatecensored.r17.cf3.rackcdn.com becomes rackspacecdnurlgoeshereprivatecensored.r17.cf3.rackcdn.com/somefolder/somefile.png
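
If you only care about the debug headers themselves, you can quieten curl down and filter for them; a small sketch using the same censored test URL and a shorter Pragma list:

curl -sI http://rackspacecdnurlgoeshereprivatecensored.r17.cf3.rackcdn.com/somefolder/somefile.png -H "Pragma: akamai-x-cache-on, akamai-x-check-cacheable, akamai-x-get-cache-key" | grep -i '^x-'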

Comparing Files on the Internet or CDN with MD5 to Determine if They Present the Same Content

So, a customer today was having some issues with their CDN. They said that their SSL CDN was presenting a different image than the HTTP CDN. I thought the best way to begin any troubleshooting process would be to try to recreate the issue. To do that, I needed a way to compare the files programmatically; enter md5sum, a handy little shell utility installed by default on most Linux distributions.

[user@cbast3 ~]$ curl https://3485asd3jjc839c9d3-08e84cacaacfcebda9281e3a9724b749.ssl.cf3.rackcdn.com/companies/5825cb13f2e6c9632807d103/header.jpeg -o file ; cat file | md5sum
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  382k  100  382k    0     0  1726k      0 --:--:-- --:--:-- --:--:-- 1732k
e917a67bbe34d4eb2d4fe5a87ce90de0  -
[user@cbast3 ~]$ curl http://3485asd3jjc839c9d3-08e84cacaacfcebda9281e3a9724b749.r45.cf3.rackcdn.com/companies/5825cb13f2e6c9632807d103/header.jpeg -o file2 ; cat file2 | md5sum
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  382k  100  382k    0     0  2071k      0 --:--:-- --:--:-- --:--:-- 2081k
e917a67bbe34d4eb2d4fe5a87ce90de0  -

As we can see from the output, the md5sum (the hash) of the two files is the same, which means there is a statistically overwhelming chance the content is exactly the same. MD5 produces a fixed-length digest, and the odds of two different files producing the same digest by accident (a collision) are astronomically small for a check like this.
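
If you just want a one-shot yes/no answer, here is a minimal sketch doing the same comparison without temporary files (URL1 and URL2 are placeholders for the two CDN URLs being compared):

URL1='https://example.ssl.cf3.rackcdn.com/companies/example/header.jpeg'
URL2='http://example.r45.cf3.rackcdn.com/companies/example/header.jpeg'
[ "$(curl -s "$URL1" | md5sum | cut -d' ' -f1)" = "$(curl -s "$URL2" | md5sum | cut -d' ' -f1)" ] && echo "files match" || echo "files differ"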

In this case I was able to disprove the customer’s claim. Not because I wanted to, but because I wanted to solve their issue. These results show me that if the issue is with the CDN at all, it must be with an edge node local to the customer. Since I was unable to recreate it from my location, it is not unreasonable to assume it is either a client-side issue or a failure on a CDN edge node near the customer. That’s how I troubleshot this, and I’m quite happy with it! It took about 2 minutes to do, and a few minutes to come up with. A quick and useful check indeed, which considerably reduces the number of possibilities when tracing down the issue.

Cheers &
Best wishes,
Adam

Please note the real CDN location has been altered for privacy reasons

Creating a proper Method of Retrieving, Sorting, and Parsing Rackspace CDN Access Logs

So, this has been rather a bane of the life that is lived as Adam Bull. Basically, a large customer of ours had 50+ CDNs and literally hundreds of gigabytes of log files. They were all in Rackspace Cloud Files, and the big question was ‘how do I know how busy my CDN is?’.


This is a remarkably good question, because not many tools are provided here, and the customer will, much like on many other CDN services, have to download those logs and then process them. But that isn’t easy either, and I spent a good few weeks (albeit only when I had time) trying to figure out the best way to do this. I dabbled with using tree to display the most commonly used logs; I played with piwik, awstats and others such as goaccess, all to no avail; I even used a sophisticated AWK script from our good friends in Operations. No luck, nothing, do not pass go, do not collect $200. So I was forced to write something to achieve this from start to finish. There are 3 problems:

1) how to easily obtain .CDN_ACCESS_LOGS from Rackspace Cloud Files to Cloud Server (or remote).
2) how to easily process these logs, in which format.
3) how to easily present these logs, using which application.

The first challenge was actually retrieving the files.

swiftly --verbose --eventlet --concurrency=100 get .CDN_ACCESS_LOGS --all-objects -o ./

Naturally, to perform the step above you will need a working, set-up swiftly environment. If you don’t know what swiftly is, or don't know how to set up a swiftly environment, please see the article I wrote on deleting all files with swiftly (the howto explains the environment setup first; just don’t follow that article to the end, and continue from here once you’ve installed and set up swiftly).

For more info see:
https://community.rackspace.com/products/f/25/t/7190
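
For reference, a working swiftly environment usually amounts to little more than a ~/.swiftly.conf along these lines (the values here are placeholders; the article above covers the full setup):

[swiftly]
auth_user = mycloudusername
auth_key = mycloudapikey
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = LON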

Processing the Rackspace CDN logs that we’ve downloaded, and organising them for further log processing

This required a lot more effort and thought.

The script below sits in the same folder as all of the downloaded log containers:

# ls -al 
total 196
drwxrwxr-x 36 root root  4096 Nov  7 12:33 .
drwxr-xr-x  6 root root  4096 Nov  7 12:06 ..
# used by my script
-rw-rw-r--  1 root root  1128 Nov  7 12:06 alldirs.txt

# CDN Log File containers as we downloaded them from swiftly Rackspace Cloud Files (.CDN_ACCESS_LOGS)
drwxrwxr-x  3 root root  4096 Oct 19 11:22 dev.demo.video.cdn..com
drwxrwxr-x  3 root root  4096 Oct 19 11:22 europe.assets.lon.tv
drwxrwxr-x  5 root root  4096 Oct 19 11:22 files.lon.cdn.lon.com
drwxrwxr-x  3 root root  4096 Oct 19 11:23 files.blah.cdn..com
drwxrwxr-x  5 root root  4096 Oct 19 11:24 files.demo.cdn..com
drwxrwxr-x  3 root root  4096 Oct 19 11:25 files.invesco.cdn..com
drwxrwxr-x  3 root root  4096 Oct 19 11:25 files.test.cdn..com
-rw-r--r--  1 root root   561 Nov  7 12:02 generate-report.sh
-rwxr-xr-x  1 root root  1414 Nov  7 12:15 logparser.sh

# Used by my script
drwxr-xr-x  2 root root  4096 Nov  7 12:06 parsed
drwxr-xr-x  2 root root  4096 Nov  7 12:33 parsed-combined
The logparser.sh script itself:

#!/bin/bash

# Author : Adam Bull
# Title: Rackspace CDN Log Parser
# Date: November 7th 2016

echo "Deleting previous jobs"
rm -rf parsed;
rm -rf parsed-combined

ls -ld */ | awk '{print $9}' | grep -v parsed > alldirs.txt


# Create Location for Combined File Listing for CDN LOGS
mkdir parsed

# Create Location for combined CDN or ACCESS LOGS
mkdir parsed-combined

# This just builds a list of the CDN Access Logs
echo "Building list of Downloaded .CDN_ACCESS_LOG Files"
sleep 3
while read m; do
folder=$(echo "$m" | sed 's@/@@g')
echo $folder
        echo "$m" | xargs -i find ./{} -type f -print > "parsed/$folder.log"
done < alldirs.txt

# This part cats the file lists and uses xargs to produce all the log output, before cut processing and redirecting to parsed-combined/$folder
echo "Combining .CDN_ACCESS_LOG Files for bulk processing and converting into NCSA format"
sleep 3
while read m; do
folder=$(echo "$m" | sed 's@/@@g')
cat "parsed/$folder.log" | xargs -i zcat {} | cut -d' ' -f1-10  > "parsed-combined/$folder"
done < alldirs.txt


# This part processes the Log files with Goaccess, generating HTML reports
echo "Generating Goaccess HTML Logs"
sleep 3
while read m; do
folder=$(echo "$m" | sed 's@/@@g')
goaccess -f "parsed-combined/$folder" -a -o "/var/www/html/$folder.html"
done < alldirs.txt

How to easily present these logs

I kind of deceived you with the last step, because I have actually already done it with the above script. You will, though, naturally need an httpd installed, with a documentroot at /var/www/html, so make sure you install Apache (httpd on RHEL/CentOS):

yum install httpd awstats

De de de de de de da! da da!


Some little caveats:

Generating a master index.html file of all the sites


[root@cdn-log-parser-mother html]# pwd
/var/www/html
[root@cdn-log-parser-mother html]# ls -al | awk '{print $9}' | xargs -i echo " {} " > index.html

I will expand the script to generate this automatically soon, but for now I'm leaving it like this due to time constraints.

Creating basic Log Tree’s for Rackspace CDN

Well, this was a little bit of a hack, but I share it because it’s quite cool.

#!/bin/bash

# Simple script that is designed to process Rackspace .CDN_ACCESS_LOGS recursively
# once they are downloaded with swiftly

# to download the CDN logs use
# swiftly --verbose --eventlet --concurrency=100 get .CDN_ACCESS_LOGS --all-objects -o ./
alldirs=$(ls -al -1);

echo "$alldirs" | awk '{print $9}' > alldirs.txt



for item in `cat alldirs.txt`

 do
        echo " --- CONTAINER $item START ---"
        tree -L 2 $item;
 echo " --- CONTAINER $item END --- "
 printf "\n\n"
done

Taking strace output from stderr and piping to other utilities

Well, this is a strange thing to do, but say you want to know how fast an application is processing data. How do you tell? Enter strace, and… a bit of wc -l, with the assistance of tee and proper 2>&1 redirection.

strace -p 9653 2>&1 | tee >(wc -l)

where 9653 is the process id (pid) and wc -l is the command you want to pipe to!

read(4, "2.74.2.34 - - [26/05/2015:15:15"..., 8192) = 8192
read(4, "o) Version/7.1.6 Safari/537.85.1"..., 8192) = 8192
read(4, "ident/6.0)\"\n91.233.154.8 - - [26"..., 8192) = 8192
^C1290

1290 lines in the output… perfect, that’s what I wanted to know: roughly how quickly this log parser is going through my logs 😀
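
If pv happens to be installed, a variant of the same trick will show a live lines-per-second rate rather than a one-off count:

strace -p 9653 2>&1 | pv -l -r > /dev/null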

Concatenating all Rackspace CDN Logs into a single File

In my previous article, Retrieve CDN log files via swiftly, I showed how to download all of the CDN logs.

After downloading all of the CDN logs, you will likely want to parse them. However, Rackspace presently writes a separate CDN log file for each hour of each day, month and year. This is convenient if you need the logs separate, but if you want to parse them with something like awstats, piwik, or another log parser such as goaccess, it helps if they are all part of the same file.

Here’s how I achieved it (where /home/adam/cdn is the path to the CDN logs). Don’t worry, this will pick up ALL log files in there, at least the ones that are gz files:

find /home/adam/cdn -type f | xargs -i zcat {} > alldomains.cdn.log

I could probably have used a -name '*.gz' filter to be more selective about what find picks up. It works nicely though. It’s a quickie, but a goodie.
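
For instance, a stricter variant that only touches the gzipped logs might look like this:

find /home/adam/cdn -type f -name '*.gz' | xargs -i zcat {} > alldomains.cdn.log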

Troubleshooting Rackspace CDN not serving files

A customer came to us with a strange and odd issue with their CDN. I wanted to document this so that it is understood why it happened.

The customer was using two TLS origins and HTTP/2. Why that is a problem will become clear. This is also a general method of troubleshooting: replicating the behaviour of the CDN against the origin with host headers. It can be applied no matter what the problem, to understand the HTTP code given by the origin, which at least half of the time turns out to be the cause (the origin being the cloud server your CDN is backed by).

Question
Hi,
We are currently experiencing some issues with the Cloud CDN. We are using this for our CSS and images and now everything is getting a HTTP/503 SERVICE UNAVAILABLE. If you want to test, you may test this url:
https://cdn.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css

This is supposed to deliver this file:
https://origin.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css

Is something mis-configured or are there some issues on the appliance?

Answer

First we confirm the origin is UP

# curl -I https://originserver.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css
HTTP/1.1 200 OK
Date: Tue, 11 Oct 2016 08:45:42 GMT
Server: Apache
Last-Modified: Tue, 11 Oct 2016 06:57:53 GMT
ETag: "ed26-53e91653c61d0"
Accept-Ranges: bytes
Content-Length: 60710
Vary: Accept-Encoding
Cache-Control: max-age=31536000, public
Expires: Wed, 11 Oct 2017 08:45:42 GMT
Access-Control-Allow-Origin: *
X-Frame-Options: SAMEORIGIN
Content-Type: text/css

The origin is the cloud server the CDN pulls from, and as we can see the site is up. So what is causing the issue? The way the CDN works is that it sends a Host header for the domain, so the origin site has to accept requests for both hostnames. The reason is that the CDN uses CNAME hostnames to identify which CDN is which, i.e. which path such as /media/ maps to which static origin subdomain.

The best way to look further at the situation is to check the origin (the subdomain you’ve associated with the CDN subdomain that Rackspace gives you) while sending the Host header for the CDN URL. When we do, we get:

root@myweb:~# curl -I https://origin.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css -H 'host: cdn.cusomerdomain.no'
HTTP/1.1 421 Misdirected Request
Date: Tue, 11 Oct 2016 08:17:38 GMT
Server: Apache
Content-Type: text/html; charset=iso-8859-1

As we can see, we get this odd HTTP 421 Misdirected Request. The same thing happens whichever CDN hostname we send in the Host header:

# curl -I https://origin.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css -H 'host: mycdnname1.scdn4.secure.raxcdn.com'
HTTP/1.1 421 Misdirected Request
Date: Tue, 11 Oct 2016 08:18:06 GMT
Server: Apache
Content-Type: text/html; charset=iso-8859-1

~# curl -I https://origin.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css -H 'host: cdncustomercname.cusomerdomain.com'
HTTP/1.1 421 Misdirected Request
Date: Tue, 11 Oct 2016 08:17:45 GMT
Server: Apache
Content-Type: text/html; charset=iso-8859-1

https://httpd.apache.org/docs/2.4/mod/mod_http2.html

Looking at the mod_http2 documentation above, this issue is caused by different TLS configurations for your domains and HTTP/2 trying to reuse the same connection, which will not work if the TLS configurations are not the same on the origin cloud server side.

You just need to disable HTTP/2, or make the TLS configuration the same for every virtualhost on the Apache side. I hope that this clarifies things; of course, if you have additional questions, comments or concerns, please don't hesitate to reach out to us, we are here to help!
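
By way of illustration only (not the customer's exact configuration), on a Debian/Ubuntu-style Apache that could mean either of the following:

# option 1: disable the HTTP/2 module entirely
a2dismod http2 && systemctl reload apache2

# option 2: keep the module, but only negotiate HTTP/1.1
# inside each affected <VirtualHost *:443> block:
#   Protocols http/1.1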

As you can see, it's important to debug the CDN by sending the Host header the CDN uses to the origin, to replicate the issue the customer was experiencing. Essentially, the CDN edge nodes (the machines around the world that pull from the origin to distribute content) weren't able to retrieve files from the origin with the Host header domain defined in the control panel.

In this case the customer needed to adjust their Apache configuration. The problem was likely introduced by an Apache update or similar.

Retrieve all CDN rackcdn.com URLs and URIs

So, today we had a customer ask if I could trace down the containers behind their specific domain CNAMEs.

First, I would dig the CNAME the customer set up, for instance cdn.customerdomain.com. This would give me a rackcdn.com link like:

# dig adam.haxed.me.uk

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.2 <<>> adam.haxed.me.uk
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19402
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1500
;; QUESTION SECTION:
;adam.haxed.me.uk.		IN	A

;; ANSWER SECTION:
adam.haxed.me.uk.	3600	IN	CNAME	ceb47133a715104a5805-6490a1e5c1b40c9f5aaee7a62e1812f7.r59.cf3.rackcdn.com.
ceb47133a715104a5805-6490a1e5c1b40c9f5aaee7a62e1812f7.r59.cf3.rackcdn.com. 300 IN CNAME	a59.rackcdn.com.
a59.rackcdn.com.	281	IN	CNAME	a59.rackcdn.com.mdc.edgesuite.net.
a59.rackcdn.com.mdc.edgesuite.net. 300 IN CNAME	a61.dscg10.akamai.net.
a61.dscg10.akamai.net.	1	IN	A	104.86.110.99
a61.dscg10.akamai.net.	1	IN	A	104.86.110.115

;; Query time: 39 msec
;; SERVER: 83.138.151.81#53(83.138.151.81)
;; WHEN: Thu Apr 14 09:15:25 UTC 2016
;; MSG SIZE  rcvd: 261

This gives me the detail of the CDN URL that my domain points to. But what if I am trying to track down the container, like my customer was? I will now create a script to list ALL the rackcdn.com URLs. Then we can search for the ceb47133 hostname that adam.haxed.me.uk points to, which will give us the 'name' of the Cloud Files container that the rackcdn URL is associated with.

# Cloud account credentials (placeholders)
USERNAME='mycloudusername'
APIKEY='mycloudapikey'

# Authenticate against the identity API and extract the auth token from the JSON response
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Your account (tenant) ID and the Cloud Files CDN endpoint for it
TENANT=10045567
API_ENDPOINT="https://cdn3.clouddrive.com/v1/MossoCloudFS_$TENANT"
#API_ENDPOINT="https://global.cdn.api.rackspacecloud.com/v1.0/$TENANT"
#API_ENDPOINT="https://cdn3.clouddrive.com/v1/MossoCloudFS_c2ad0d46-31e2-4c31-a60b-b611bb8e5f8b2"

# List every CDN-enabled container, with all of its rackcdn.com URLs, as JSON
curl -v -X GET $API_ENDPOINT/?format=json \
-H "X-Auth-Token: $TOKEN" | python -mjson.tool

It's well worth noting that the API endpoint differs from customer to customer, so you may wish to retrieve all of your endpoints to check you have the right CDN one. If you get a permission error, it is likely the API endpoint that is wrong; see the bottom of this page for how to check your endpoint is correct. It's different for each customer, I have learnt.

[
    {
        "cdn_enabled": true,
        "cdn_ios_uri": "http://a30ae7cddb38b2112bce-03b08b0e5c91ea60f938585ef20a12d7.iosr.cf3.rackcdn.com",
        "cdn_ssl_uri": "https://1627826b1dc042d6b3be-03b08b0e5c91ea60f938585ef20a12d7.ssl.cf3.rackcdn.com",
        "cdn_streaming_uri": "http://ee7e9298372b91eea2d2-03b08b0e5c91ea60f938585ef20a12d7.r91.stream.cf3.rackcdn.com",
        "cdn_uri": "http://beb2ec8d649b0d717ef9-03b08b0e5c91ea60f938585ef20a12d7.r91.cf3.rackcdn.com",
        "log_retention": false,
        "name": "some.com.cdn.container",
        "ttl": 86400
    },
    {
        "cdn_enabled": true,
        "cdn_ios_uri": "http://0381268aadeda8ceab1e-37d5bb63c6aad292ad490c7fddb2f62f.iosr.cf3.rackcdn.com",
        "cdn_ssl_uri": "https://5b190eda013130300b94-37d5bb63c6aad292ad490c7fddb2f62f.ssl.cf3.rackcdn.com",
        "cdn_streaming_uri": "http://5f756e93360bbef82e84-37d5bb63c6aad292ad490c7fddb2f62f.r75.stream.cf3.rackcdn.com",
        "cdn_uri": "http://47aabb1759520adb10a1-37d5bb63c6aad292ad490c7fddb2f62f.r75.cf3.rackcdn.com",
        "log_retention": false,
        "name": "container-001",
        "ttl": 604800
    },
    {
        "cdn_enabled": true,
        "cdn_ios_uri": "http://006acc500edc34a84075-1257f240203d0254bc8c5602aafda48d.iosr.cf3.rackcdn.com",
        "cdn_ssl_uri": "https://b68de0566314da76870d-1257f240203d0254bc8c5602aafda48d.ssl.cf3.rackcdn.com",
        "cdn_streaming_uri": "http://632bed500bfc691eb677-1257f240203d0254bc8c5602aafda48d.r49.stream.cf3.rackcdn.com",
        "cdn_uri": "http://b52a6ade17a64c459d85-1257f240203d0254bc8c5602aafda48d.r49.cf3.rackcdn.com",
        "log_retention": false,
        "name": "container-002",
        "ttl": 604800
    },
    {
        "cdn_enabled": true,
        "cdn_ios_uri": "http://38d59ebf089e8ebe00a0-6490a1e5c1b40c9f5aaee7a62e1812f7.iosr.cf3.rackcdn.com",
        "cdn_ssl_uri": "https://02a84412d877be1b8313-6490a1e5c1b40c9f5aaee7a62e1812f7.ssl.cf3.rackcdn.com",
        "cdn_streaming_uri": "http://b8b8fe52062f7fb25f43-6490a1e5c1b40c9f5aaee7a62e1812f7.r59.stream.cf3.rackcdn.com",
        "cdn_uri": "http://ceb47133a715104a5805-6490a1e5c1b40c9f5aaee7a62e1812f7.r59.cf3.rackcdn.com",
        "log_retention": false,
        "name": "scripts",
        "ttl": 259200
    },
    {
        "cdn_enabled": true,
        "cdn_ios_uri": "http://0c29cc67d5299ac41fa0-1426fb5304d7a905cdef320e9b667254.iosr.cf3.rackcdn.com",
        "cdn_ssl_uri": "https://4df79706147258ab315b-1426fb5304d7a905cdef320e9b667254.ssl.cf3.rackcdn.com",
        "cdn_streaming_uri": "http://66baf30a268d99e66228-1426fb5304d7a905cdef320e9b667254.r68.stream.cf3.rackcdn.com",
        "cdn_uri": "http://8b27955f0b728515adde-1426fb5304d7a905cdef320e9b667254.r68.cf3.rackcdn.com",
        "log_retention": false,
        "name": "test",
        "ttl": 259200
    },
    {
        "cdn_enabled": true,
        "cdn_ios_uri": "http://cc1d82abf0fbfced78b7-53ad0106578d82de3911abdf4b56c326.iosr.cf3.rackcdn.com",
        "cdn_ssl_uri": "https://7173244627f44933cf9e-53ad0106578d82de3911abdf4b56c326.ssl.cf3.rackcdn.com",
        "cdn_streaming_uri": "http://dd74f1300c187bb447f3-53ad0106578d82de3911abdf4b56c326.r30.stream.cf3.rackcdn.com",
        "cdn_uri": "http://cb7b587bb6e7186c9308-53ad0106578d82de3911abdf4b56c326.r30.cf3.rackcdn.com",
        "log_retention": false,
        "name": "test2",
        "ttl": 259200
    }
]
To check your endpoints, you can dump the full token response, which includes the service catalogue listing every API endpoint on your account:

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool`
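
From that output, a quick way to spot the Cloud Files CDN endpoint is to grep for the clouddrive hostname seen earlier:

curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep clouddrive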

As we can see below, the ceb47133 rackcdn link is the 'scripts' container. The CNAME adam.haxed.me.uk points to the rackcdn.com domain http://ceb47133a715104a5805-6490a1e5c1b40c9f5aaee7a62e1812f7.r59.cf3.rackcdn.com, which is 'pointing' at the Cloud Files 'scripts' container.

    {
        "cdn_enabled": true,
        "cdn_ios_uri": "http://38d59ebf089e8ebe00a0-6490a1e5c1b40c9f5aaee7a62e1812f7.iosr.cf3.rackcdn.com",
        "cdn_ssl_uri": "https://02a84412d877be1b8313-6490a1e5c1b40c9f5aaee7a62e1812f7.ssl.cf3.rackcdn.com",
        "cdn_streaming_uri": "http://b8b8fe52062f7fb25f43-6490a1e5c1b40c9f5aaee7a62e1812f7.r59.stream.cf3.rackcdn.com",
        "cdn_uri": "http://ceb47133a715104a5805-6490a1e5c1b40c9f5aaee7a62e1812f7.r59.cf3.rackcdn.com",
        "log_retention": false,
        "name": "scripts",
        "ttl": 259200
    },

Simple enough!