Creating Isolated Cloud Networks through the API in Rackspace Cloud

Hey! So, today I was playing around with the Cloud Networking API and thought I would document the basic process of creating a network. It’s simple enough and follows the same logic as many of my other tutorials on cloud files, load balancers and so on.

#!/bin/bash

USERNAME='mycloudusername'
APIKEY='mycloudapikey'
ACCOUNT_NUMBER=10010101
API_ENDPOINT="https://lon.networks.api.rackspacecloud.com/v2.0"

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

curl -s -v  \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNT_NUMBER" \
-H "Accept: application/json"  \
-X POST -d @create-network.json "$API_ENDPOINT/networks" | python -mjson.tool

For the above code to create a new network, you need to create the create-network.json file alongside the script. It needs to be in this format:

{
    "network":
    {
        "name": "Isolatednet",
        "shared": false,
        "tenant_id": "10010101"
    }
}

It’s important to note that you need to define the tenant_id; that’s your account number, the one that appears in the URL when you log in to the mycloud control panel.
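Since the tenant_id in the JSON body has to match your account number, a quick sanity check can save a confusing API error. This is just a sketch; it recreates the same create-network.json as above and compares the two values with a proper JSON parse (python3 here):

```shell
ACCOUNT_NUMBER=10010101

# Same request body as above
cat > create-network.json <<'EOF'
{
    "network": {
        "name": "Isolatednet",
        "shared": false,
        "tenant_id": "10010101"
    }
}
EOF

# Parse the tenant_id out of the JSON body and compare it to the account number
TENANT=$(python3 -c 'import json; print(json.load(open("create-network.json"))["network"]["tenant_id"])')
if [ "$TENANT" = "$ACCOUNT_NUMBER" ]; then
    echo "tenant_id matches account number"
else
    echo "tenant_id MISMATCH: $TENANT vs $ACCOUNT_NUMBER"
fi
```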

Output looks like

* Connection #0 to host lon.networks.api.rackspacecloud.com left intact
{
    "network": {
        "admin_state_up": true,
        "id": "ae36972f-5cba-4327-8bff-15d8b05dc3ee",
        "name": "Isolatednet",
        "shared": false,
        "status": "ACTIVE",
        "subnets": [],
        "tenant_id": "10010101"
    }
}

MySQL running out of memory being killed by OOM Killer

So, every now and then we get customers asking if they can increase their memory because MySQL keeps being killed by the kernel. On, say, a server with 2GB of physical RAM, MySQL eats it all up and then tries to use even more, at which point the kernel’s OOM killer steps in and is like ‘no.. stop that’. This kind of thing can be avoided by configuring MySQL with proper limits so it doesn’t flood the physical hardware.

This is commonly overlooked in MySQL databases; no tuning is done at all, but it’s important to base the MySQL configuration (/etc/my.cnf) on the physical hardware of the server. So if you increase the RAM on the server, to get the optimum speed you’d want to increase some of these buffers too. A friend of mine mentioned a great trick used by some organisations of pointing the MySQL data directory at memory (a tmpfs); it’s a great performance increase, as it completely avoids the disk, the only downside being that if the box turns off, the database is gone 😀

innodb_buffer_pool_size = 384M
key_buffer = 256M
query_cache_size = 128M
query_cache_limit = 1M
thread_cache_size = 8
max_connections = 400
innodb_lock_wait_timeout = 100

I found this config for a 2GB server on Stack Overflow, and it looks just about right. Adjusting max_connections to suit should ensure that the box doesn’t get too overloaded, and the buffer sizes matter too. One thing to bear in mind: by restricting RAM, some queries might not run as fast, but the database won’t suddenly go offline with its process killed. That’s what you really want, isn’t it?
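To see why max_connections matters so much on a 2GB box, here is a back-of-the-envelope worst-case estimate for the config above. The ~3MB per-connection figure is an assumption (roughly the default sort, read and join buffers plus thread stack), not something measured from a real server:

```shell
# Rough worst-case MySQL memory use, in MB, for the config above.
INNODB_BUFFER_POOL=384
KEY_BUFFER=256
QUERY_CACHE=128          # the larger of the two query_cache settings
MAX_CONNECTIONS=400
PER_CONNECTION=3         # assumed per-thread buffers in MB, roughly the defaults

GLOBAL=$((INNODB_BUFFER_POOL + KEY_BUFFER + QUERY_CACHE))
WORST_CASE=$((GLOBAL + MAX_CONNECTIONS * PER_CONNECTION))
echo "global buffers: ${GLOBAL}MB, worst case with all connections busy: ${WORST_CASE}MB"
```

That lands just under 2GB before any other process on the box gets a look-in, which is exactly how a 2GB server ends up in OOM-killer territory.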

Using Meta-data to track Rackspace Cloud Servers

Hey, so from time to time we have customers who ask us how they can tag their servers, whether for automation or just for organising them. Whilst it’s not possible to tag servers through the API in such a way that the ‘tag’ shows in the UI (that you can add in the mycloud control panel), you can instead use the cloud server metadata set call, and it’s easy enough. Here is how I achieved it.

set-meta-data.sh

#!/bin/bash

USERNAME='mycloudusername'
APIKEY='mycloudapikey'
# Tenant ID (account number is the number shown in the URL address when logged into Rackspace control panel)
ACCOUNT_NUMBER=1001010
API_ENDPOINT="https://lon.servers.api.rackspacecloud.com/v2/$ACCOUNT_NUMBER"
SERVER_ID='e9036384-c9be-4c8c-8551-c2f269c424bc'

# This just grabs from a large JSON output the AUTH TOKEN for the API. We auth with the apikey, and we get the auth token and set it in this variable 'TOKEN'
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`


# Then we re-use the $TOKEN we retrieved for the call to the API, supply the $ACCOUNT_NUMBER and importantly, the $API_ENDPOINT.
# Also we sent a file, metadata.json that contains the meta-data we want to add to the server.
curl -s -v  \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNT_NUMBER" \
-H "Accept: application/json"  \
-X PUT -d @metadata.json -H "content-type: application/json" "$API_ENDPOINT/servers/$SERVER_ID/metadata" | python -mjson.tool

metadata.json

{
    "metadata": {
        "Label" : "MyServer",
        "Version" : "v1.0.1-2"
    }
}

chmod +x set-meta-data.sh
./set-meta-data.sh

OK , so now you’ve set the data.

What about retrieving it, you ask? That’s not too difficult. Just replace the PUT with a GET, take away the -d @metadata.json bit, and we’re off, like so:

get-meta-data.sh


#!/bin/bash

USERNAME='mycloudusername'
APIKEY='mycloudapikey'
ACCOUNT_NUMBER=1001010
API_ENDPOINT="https://lon.servers.api.rackspacecloud.com/v2/$ACCOUNT_NUMBER"
SERVER_ID='c2036384-c9be-4c8c-8551-c2f269c4249r'


TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`



curl -s -v  \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNT_NUMBER" \
-H "Accept: application/json"  \
-X GET "$API_ENDPOINT/servers/$SERVER_ID/metadata" | python -mjson.tool

Simples! And as the Fonz would say, ‘Hey, grades are not cool, learning is cool.’

chmod +x get-meta-data.sh
./get-meta-data.sh 
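To make the PUT/GET difference explicit, here is a hypothetical dry-run helper (the names are made up, not part of the scripts above): it prints the curl command each script would run, showing that only the verb and the request body differ.

```shell
# Hypothetical dry-run helper: prints the curl each direction would use.
API_ENDPOINT="https://lon.servers.api.rackspacecloud.com/v2/1001010"
SERVER_ID='e9036384-c9be-4c8c-8551-c2f269c424bc'

build_curl() {
    # $1 is "set" (PUT with a JSON body) or "get" (plain GET)
    if [ "$1" = "set" ]; then
        echo "curl -X PUT -d @metadata.json $API_ENDPOINT/servers/$SERVER_ID/metadata"
    else
        echo "curl -X GET $API_ENDPOINT/servers/$SERVER_ID/metadata"
    fi
}

build_curl set
build_curl get
```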

Troubleshooting Networking, initial steps

At work we see a lot of stuff come up from day to day, and one of the recurring issues is networking, specifically Rackspace Cloud Networks (the underlying OpenStack project is Neutron). This is the Rackspace implementation of isolated network entities. Amazon has VPC, the virtual private cloud, and the concepts are quite similar.

In this case one of our customer’s web machines wasn’t able to ping other machines. The first thing I did was ask the customer to ping the other machine from their web machine, and to ping the web machine from the other machine.

In this case the customer was reporting problems with an isolated network (i.e. not the public or private interfaces), so not eth0 or eth1, but the eth2 interface. Here is what my tcpdump on the hypervisor looked like.

$ tcpdump -i vif{domainid}.{network}

10:28:30.542146 bc:76:4e:09:2a:69 > bc:76:4e:08:43:86, ethertype ARP (0x0806), length 42: Request who-has 192.168.66.19 tell 192.168.66.3, length 28
10:28:30.542486 bc:76:4e:08:43:86 > bc:76:4e:09:2a:69, ethertype ARP (0x0806), length 42: Reply 192.168.66.19 is-at bc:76:4e:08:43:86, length 28
10:28:30.571805 bc:76:4e:09:2a:69 > bc:76:4e:08:43:86, ethertype IPv4 (0x0800), length 98: 192.168.66.3 > 192.168.66.19: ICMP echo request, id 29516, seq 6, length 64
10:28:31.579785 bc:76:4e:09:2a:69 > bc:76:4e:08:43:86, ethertype IPv4 (0x0800), length 98: 192.168.66.3 > 192.168.66.19: ICMP echo request, id 29516, seq 7, length 64
10:28:32.587837 bc:76:4e:09:2a:69 > bc:76:4e:08:43:86, ethertype IPv4 (0x0800), length 98: 192.168.66.3 > 192.168.66.19: ICMP echo request, id 29516, seq 8, length 64

As we can see, 192.168.66.19 is being pinged by 192.168.66.3 but there is no ping reply. If there were a reply it would look something like:

192.168.66.19 > 192.168.66.3: ICMP echo reply, id 29516, seq 7, length 64

192.168.66.3 broadcasts the ARP request, asking the local network segment which MAC address 192.168.66.19 has. This is answered and the hardware MAC address is given ('192.168.66.19 is-at bc:76:4e:08:43:86'), but still 192.168.66.19 isn't sending a ping echo reply.

From the ARP reply we can see 192.168.66.3 knows where to physically send the packet for .19, and this goes through the local switch. The switch maintains a table of which MAC address lives behind which port.

In this case something was going wrong. For some reason 192.168.66.19 wasn't replying to the pings from 192.168.66.3, even though the hardware MAC address had been resolved.
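When eyeballing captures like this, it can help to strip each line down to just who-sent-what. A small awk sketch over one of the sample lines above (splitting on ": " works because only the length field and the IP pair are followed by a colon-space):

```shell
# One ICMP line from the capture above.
LINE='10:28:30.571805 bc:76:4e:09:2a:69 > bc:76:4e:08:43:86, ethertype IPv4 (0x0800), length 98: 192.168.66.3 > 192.168.66.19: ICMP echo request, id 29516, seq 6, length 64'

# Field 2 is "src > dst"; field 3 starts with the ICMP type.
echo "$LINE" | awk -F': ' '{sub(/,.*/, "", $3); print $2, $3}'
# prints: 192.168.66.3 > 192.168.66.19 ICMP echo request
```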

However the weird thing is, the problem suddenly went away again!

11:09:26.735818 bc:76:4e:09:2a:69 > bc:76:4e:08:43:86, ethertype IPv4 (0x0800), length 98: 192.168.66.3 > 192.168.66.19: ICMP echo request, id 29516, seq 2453, length 64
11:09:27.197715 bc:76:4e:08:43:86 > bc:76:4e:09:2a:69, ethertype IPv4 (0x0800), length 98: 192.168.66.19 > 192.168.66.3: ICMP echo request, id 53772, seq 232, length 64
11:09:27.198315 bc:76:4e:09:2a:69 > bc:76:4e:08:43:86, ethertype IPv4 (0x0800), length 98: 192.168.66.3 > 192.168.66.19: ICMP echo reply, id 53772, seq 232, length 64
11:09:27.743907 bc:76:4e:09:2a:69 > bc:76:4e:08:43:86, ethertype IPv4 (0x0800), length 98: 192.168.66.3 > 192.168.66.19: ICMP echo request, id 29516, seq 2454, length 64
11:09:28.198486 bc:76:4e:08:43:86 > bc:76:4e:09:2a:69, ethertype IPv4 (0x0800), length 98: 192.168.66.19 > 192.168.66.3: ICMP echo request, id 53772, seq 233, length 64
11:09:28.201819 bc:76:4e:09:2a:69 > bc:76:4e:08:43:86, ethertype IPv4 (0x0800), length 98: 192.168.66.3 > 192.168.66.19: ICMP echo reply, id 53772, seq 233, length 64
11:09:28.751893 bc:76:4e:09:2a:69 > bc:76:4e:08:43:86, ethertype IPv4 (0x0800), length 98: 192.168.66.3 > 192.168.66.19: ICMP echo request, id 29516, seq 2455, length 64
11:09:29.203245 bc:76:4e:08:43:86 > bc:76:4e:09:2a:69, ethertype IPv4 (0x0800), length 98: 192.168.66.19 > 192.168.66.3: ICMP echo request, id 53772, seq 234, length 64
11:09:29.203737 bc:76:4e:09:2a:69 > bc:76:4e:08:43:86, ethertype IPv4 (0x0800), length 98: 192.168.66.3 > 192.168.66.19: ICMP echo reply, id 53772, seq 234, length 64
11:09:29.759691 bc:76:4e:09:2a:69 > bc:76:4e:08:43:86, ethertype IPv4 (0x0800), length 98: 192.168.66.3 > 192.168.66.19: ICMP echo request, id 29516, seq 2456, length 64
11:09:30.203991 bc:76:4e:08:43:86 > bc:76:4e:09:2a:69, ethertype IPv4 (0x0800), length 98: 192.168.66.19 > 192.168.66.3: ICMP echo request, id 53772, seq 235, length 64
11:09:30.204516 bc:76:4e:09:2a:69 > bc:76:4e:08:43:86, ethertype IPv4 (0x0800), length 98: 192.168.66.3 > 192.168.66.19: ICMP echo reply, id 53772, seq 235, length 64

All of a sudden echo replies were flowing between 192.168.66.3 and 192.168.66.19.

192.168.66.19 pings 192.168.66.3

11:09:27.197715 bc:76:4e:08:43:86 > bc:76:4e:09:2a:69, ethertype IPv4 (0x0800), length 98: 192.168.66.19 > 192.168.66.3: ICMP echo request, id 53772, seq 232, length 64

192.168.66.3 responds back to 192.168.66.19

11:09:27.198315 bc:76:4e:09:2a:69 > bc:76:4e:08:43:86, ethertype IPv4 (0x0800), length 98: 192.168.66.3 > 192.168.66.19: ICMP echo reply, id 53772, seq 232, length 64

The ultimate question is WHY. I don't know why, but I've shown you how to see WHAT and WHERE, which is the most pertinent way to begin reaching a why ;D

Testing CDN Consistency with bash date time curl while loop

This is a simple one. So a customer was complaining that after 3 minutes the cache time of the file on his CDN was changing. I wanted to build a way to test the consistency of the requests. Here is how I did it.

file curl-format.txt

    time_namelookup:  %{time_namelookup}\n
       time_connect:  %{time_connect}\n
    time_appconnect:  %{time_appconnect}\n
   time_pretransfer:  %{time_pretransfer}\n
      time_redirect:  %{time_redirect}\n
 time_starttransfer:  %{time_starttransfer}\n
                    ----------\n
         time_total:  %{time_total}\n

Short, simple, and to the point;

while ((1!=0)); do date; curl -w "@curl-format.txt" -o /dev/null -s "https://www.somecdndomain.secure.raxcdn.com/img/upload/3someimage_t32337827238.jpg"; done;

Output looks like:

                    ----------
         time_total:  0.395
Tue Feb  9 09:03:28 UTC 2016
      time_namelookup:  0.151
       time_connect:  0.154
    time_appconnect:  0.332
   time_pretransfer:  0.333
      time_redirect:  0.000
 time_starttransfer:  0.338
                    ----------
         time_total:  0.351
Tue Feb  9 09:03:28 UTC 2016
      time_namelookup:  0.151
       time_connect:  0.154
    time_appconnect:  0.324
   time_pretransfer:  0.324
      time_redirect:  0.000
 time_starttransfer:  0.331
                    ----------
         time_total:  0.347
Tue Feb  9 09:03:29 UTC 2016
      time_namelookup:  0.151
       time_connect:  0.154
    time_appconnect:  0.385
   time_pretransfer:  0.385
      time_redirect:  0.000
 time_starttransfer:  0.391
                    ----------
         time_total:  0.404
Tue Feb  9 09:03:29 UTC 2016
      time_namelookup:  0.151
       time_connect:  0.155
    time_appconnect:  0.348
   time_pretransfer:  0.349
      time_redirect:  0.000
 time_starttransfer:  0.357
                    ----------
         time_total:  0.374
Tue Feb  9 09:03:30 UTC 2016
      time_namelookup:  0.151
       time_connect:  0.155
    time_appconnect:  0.408
   time_pretransfer:  0.409
      time_redirect:  0.000
 time_starttransfer:  0.417
                    ----------
         time_total:  0.433
Tue Feb  9 09:03:30 UTC 2016

Pretty handy, Andy.
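If you redirect the loop’s output to a file, one awk pass gives you the count, min, average and max of time_total. A sketch over a made-up three-line sample (in real use you’d point it at the captured loop output instead):

```shell
# Made-up sample of the loop's output saved to a file.
cat > timings.txt <<'EOF'
         time_total:  0.395
         time_total:  0.351
         time_total:  0.347
EOF

# One awk pass: count, min, average and max of time_total.
SUMMARY=$(awk '/time_total/ {
    n++; sum += $2
    if (min == "" || $2 < min) min = $2
    if ($2 > max) max = $2
} END { printf "count=%d min=%s avg=%.3f max=%s", n, min, sum/n, max }' timings.txt)
echo "$SUMMARY"
```

A wildly varying max against a steady average is a good first hint that some requests are missing the CDN cache.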

With headers

# while ((1!=0)); do date; curl -IL -w "@curl-format.txt" -s "https://www.scdn3.secure.raxcdn.com/img/upload/3_sdsdsds6a9e80df0baa19863ffb8.jpg"; sleep 180; done;

Installing Kali Linux on the Cloud

So, I want to install Kali Linux on the cloud, which for me is fine, but I highly recommend against doing this on any cloud other than your own private cloud.

katoolin

It’s actually pretty simple to get started with Kali. Since it’s based on Debian, it’s possible to install its repositories on both Ubuntu and Debian. There’s even a really nice tool I found via tecmint.com explaining the process. Here I am using Debian Wheezy (7); I’m pretty sure I could have used Debian Jessie (8) though.

Step 1. Update repo and install git

# Update your repository
apt-get update
# Install git
apt-get install git

Step 2. Install katoolin from git

git clone https://github.com/LionSec/katoolin.git  && cp katoolin/katoolin.py /usr/bin/katoolin
# Make sure katoolin can be executed
chmod +x  /usr/bin/katoolin

# Start script to install kali
katoolin

What katoolin looks like

 $$\   $$\             $$\                         $$\ $$\           
 $$ | $$  |            $$ |                        $$ |\__|          
 $$ |$$  /  $$$$$$\  $$$$$$\    $$$$$$\   $$$$$$\  $$ |$$\ $$$$$$$\  
 $$$$$  /   \____$$\ \_$$  _|  $$  __$$\ $$  __$$\ $$ |$$ |$$  __$$\ 
 $$  $$<    $$$$$$$ |  Kali linux tools installer |$$ |$$ |$$ |  $$ |
 $$ |\$$\  $$  __$$ |  $$ |$$\ $$ |  $$ |$$ |  $$ |$$ |$$ |$$ |  $$ |
 $$ | \$$\ \$$$$$$$ |  \$$$$  |\$$$$$$  |\$$$$$$  |$$ |$$ |$$ |  $$ |
 \__|  \__| \_______|   \____/  \______/  \______/ \__|\__|\__|  \__| V1.0 


 + -- -- +=[ Author: LionSec | Homepage: www.lionsec.net
 + -- -- +=[ 330 Tools 

		

1) Add Kali repositories & Update 
2) View Categories
3) Install classicmenu indicator
4) Install Kali menu
5) Help

Press 1 to add the Kali repositories and update.
Then press 1 again; this sets the repositories.
Now press 2; this updates the repositories.

Just one more step!

Then type 'gohome' to return to the first menu.
Then press '2' to see the selection of packages to install.
Then press '0' to install all of them.

Installing goodies..

katoolin-upgrade

Using Cloud Files Versioning, Setting up from Scratch

Sooooo.. you want to use Cloud Files, but you want versioning? No problem! Here’s how you do it from the ground up.

Authorise yourself thru identity API

Basically, set the token by querying the identity API with your username and API key.

#!/bin/bash

# Username used to login to control panel
USERNAME='mycloudusername'

# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

If you were to add to this file;

echo $TOKEN

You’d see this when running it

# ./versioning.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   3991     91  0:00:01  0:00:01 --:--:--  3996
8934534DFGJdfSdsdFDS232342DFFsDDFIKJDFijTx8WMIDO8CYzbhyViGGyekRYvtw3skCYMaqIWhw8adskfjds894FGKJDFKj34i2jgidgjdf@DFsSDsd

To understand how the curl authorises itself with the identity API, and specifically how the TOKEN is extracted from the returned output and set in the script, here is the -v verbose output:


# ./versioning.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* About to connect() to identity.api.rackspacecloud.com port 443 (#0)
*   Trying 72.3.138.129...
* Connected to identity.api.rackspacecloud.com (72.3.138.129) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL connection using TLS_DHE_RSA_WITH_AES_128_CBC_SHA
* Server certificate:
* 	subject: CN=identity.api.rackspacecloud.com,OU=Domain Validated,OU=Thawte SSL123 certificate,OU=Go to https://www.thawte.com/repository/index.html,O=identity.api.rackspacecloud.com
* 	start date: Nov 14 00:00:00 2011 GMT
* 	expire date: Nov 12 23:59:59 2016 GMT
* 	common name: identity.api.rackspacecloud.com
* 	issuer: CN=Thawte DV SSL CA,OU=Domain Validated SSL,O="Thawte, Inc.",C=US
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
> POST /v2.0/tokens HTTP/1.1
> User-Agent: curl/7.29.0
> Host: identity.api.rackspacecloud.com
> Accept: */*
> Content-type: application/json
> Content-Length: 115
>
} [data not shown]
* upload completely sent off: 115 out of 115 bytes
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 02 Feb 2016 18:19:06 GMT
< Content-Type: application/json
< Content-Length: 5028
< Connection: keep-alive
< X-NewRelic-App-Data: Censored
< vary: Accept, Accept-Encoding, X-Auth-Token
< Front-End-Https: on
<
{ [data not shown]
100  5143  100  5028  100   115   3825     87  0:00:01  0:00:01 --:--:--  3826
* Connection #0 to host identity.api.rackspacecloud.com left intact
{
    "access": {
        "serviceCatalog": [
            {
                "endpoints": [
                    {
                        "internalURL": "https://snet-storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101",
                        "publicURL": "https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101",
                        "region": "LON",
                        "tenantId": "MossoCloudFS_10010101"
                    }
                ],
                "name": "cloudFiles",
                "type": "object-store"
            },
            {
   "token": {
            "RAX-AUTH:authenticatedBy": [
                "APIKEY"
            ],
            "expires": "2016-02-03T18:31:18.838Z",
            "id": "#$dfgkldfkl34klDFGDFGLK#$OFDOKGDFODJ#$OFDOGIDFOGI34ldfldfgkdo34lfdFGDKDFGDODFKDFGDFLK",
            "tenant": {
                "id": "10010101",
                "name": "10010101"
            }
        },

This is truncated (the full output is larger), but basically the "token" section is stripped down to the id: part so that only the string is left; that token is then set in the TOKEN variable.

So now you understand auth.
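If you’d rather not rely on grep -A5 (which breaks as soon as the JSON layout shifts), the same extraction can be done by actually parsing the JSON. A sketch against a made-up, cut-down identity response (the real one includes the whole serviceCatalog):

```shell
# Made-up, cut-down identity response for illustration only.
SAMPLE_RESPONSE='{"access": {"token": {"id": "abc123token", "expires": "2016-02-03T18:31:18.838Z"}}}'

# Parse the JSON properly instead of grepping for the id line.
TOKEN=$(echo "$SAMPLE_RESPONSE" | python3 -c 'import json, sys; print(json.load(sys.stdin)["access"]["token"]["id"])')
echo "$TOKEN"
# prints: abc123token
```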

Create the Version container

This contains all of the version changes of any file

i.e. if you overwrite a file 10 times, all 10 versions will be saved

# Create Versioning Container (Backup versions)
curl -i -XPUT -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/versions

Note we use $TOKEN, which is basically just the password, with the X-Auth-Token header. -H means 'send this header'; X-Auth-Token is the header name, and $TOKEN is the password we populated in the variable in the auth section above.

Create a Current Container

This only contains the 'current', i.e. latest, version of the file

# Create current container (latest versions)
curl -i -XPUT -H "X-Auth-Token: $TOKEN" -H  "X-Versions-Location: versions"  https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current

I'm being a bit naughty here; I could make MossoCloudFS_10010101 a variable, like $CONTAINERSTORE or $CONTAINERPARENT, or better $TENANTCONTAINER. But meh, you get the idea. And you learnt something.

Note, importantly, the X-Versions-Location header set when creating the 'current' cloud files container. It asks for versions of changes made in current to be stored in the versions container. Nice.
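Under the hood, Swift names each archived copy in the versions container as a 3-character zero-padded hex encoding of the object name’s length, then the name, then a slash and a timestamp. So the listing prefix for any object can be computed like this:

```shell
# Compute the Swift versioning prefix for an object name:
# "%03x" of the name's length, followed by the name itself.
NAME="myobject.obj"
PREFIX=$(printf '%03x%s' "${#NAME}" "$NAME")
echo "$PREFIX"
# prints: 00cmyobject.obj
```

Archived copies then show up as e.g. 00cmyobject.obj/1454435183.80523 when you list the versions container with that prefix.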

Create an object

Create the first version of an object, because it's awesome

# Create an object
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

Yay! My first object. I just put the number 1 in it. Not very imaginative, but you get the idea. Now let's revise the object.

Create a new version of the object

# Create a new version of the object (second version)
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

List the older versions of the object

# List the older versions of the object
curl -i -H "X-Auth-Token: $TOKEN" "https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/versions?prefix=00cmyobject.obj"

Delete the current version of an object

# Delete the current version of the object (the previous version is restored from 'versions')
curl -i -XDELETE -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

Pretty cool. Altogether now.

#!/bin/bash

# Username used to login to control panel
USERNAME='mycloudusername'

# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'


# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`


# This section creates the containers and objects to demonstrate versioning

# Create Versioning Container (Backup versions)
curl -i -XPUT -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/versions

# Create current container (latest versions)
curl -i -XPUT -H "X-Auth-Token: $TOKEN" -H  "X-Versions-Location: versions"  https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current


# Create an object
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

# Create a new version of the object (second version)
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

# List the older versions of the object
curl -i -H "X-Auth-Token: $TOKEN" "https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/versions?prefix=00cmyobject.obj"

# Delete the current version of the object (the previous version is restored from 'versions')
curl -i -XDELETE -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

What the output of the full script looks like:

# ./versioning.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   4291     98  0:00:01  0:00:01 --:--:--  4290
HTTP/1.1 202 Accepted
Content-Length: 76
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx514bac5247924b5db247d-0056b0ecb7lon3
Date: Tue, 02 Feb 2016 17:51:51 GMT

Accepted

The request is accepted for processing.

HTTP/1.1 202 Accepted
Content-Length: 76
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx7b7f42fc19b1428b97cfa-0056b0ecb8lon3
Date: Tue, 02 Feb 2016 17:51:52 GMT

Accepted

The request is accepted for processing.

HTTP/1.1 201 Created
Last-Modified: Tue, 02 Feb 2016 17:51:53 GMT
Content-Length: 0
Etag: c4ca4238a0b923820dcc509a6f75849b
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx2495824253374261bf52a-0056b0ecb8lon3
Date: Tue, 02 Feb 2016 17:51:53 GMT

HTTP/1.1 201 Created
Last-Modified: Tue, 02 Feb 2016 17:51:54 GMT
Content-Length: 0
Etag: c81e728d9d4c2f636f067f89cc14862c
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx785e4a5b784243a1b8034-0056b0ecb9lon3
Date: Tue, 02 Feb 2016 17:51:54 GMT

HTTP/1.1 204 No Content
Content-Length: 0
X-Container-Object-Count: 2
Accept-Ranges: bytes
X-Storage-Policy: Policy-0
X-Container-Bytes-Used: 2
X-Timestamp: 1454435183.80523
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx4782072371924905bc513-0056b0ecbalon3
Date: Tue, 02 Feb 2016 17:51:54 GMT

Rackspace Customer takes the time to improve my script :D

Wow, this was an awesome customer, who was obviously capable of using the API but was struggling. So I threw them my portable python -mjson parsing script for the identity token and the glance image export to cloud files. The customer wrote back, commenting that I’d made a mistake; specifically I had added ‘export’ instead of ‘exports’.

#!/bin/bash

# Task ID - supply with command
TASK=$1
# Username used to login to control panel
USERNAME='myusername'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='myapikeyhere'

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Requests progress of specified task
curl -X GET -H "X-Auth-Token: $TOKEN" "https://lon.images.api.rackspacecloud.com/v2/10010101/tasks/$TASK"

I just realised that the customer didn’t adapt the script to be able to pass in the image ID on the initial export to cloud files.

Theoretically you could not only do the above but also start the export itself, so I wrote back:

“I just realised the script you sent checks the TASK. I just amended my initial script a bit further with your suggestion, to accept mycloudusername, mycloudapikey and mycloudimageid.”

#!/bin/bash

# Username used to login to control panel
USERNAME=$1
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY=$2

# Find the image ID you'd like to make available on cloud files
IMAGEID=$3

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`


# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
curl https://lon.images.api.rackspacecloud.com/v2/10031542/tasks -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"type": "export", "input": {"image_uuid": "'"$IMAGEID"'", "receiving_swift_container": "exports"}}'

# I thought that could theoretically process the output of the above and extract $TASK_ID to check the TASK too.

Note my script isn’t perfect, but the customer did well!

This way you can simply provide the script with the cloud username, API key and image ID. Then, when the glance export starts, the task ID could be extracted in the same way as the TOKEN is from identity auth.

That way you could simply run something like

./myexportvhd.sh mycloudusername mycloudapikey mycloudimageid 

Not only would it start the image export to a set export folder, but it'd also provide you an update as to the task status.

You could go further: you could watch the task status with a bash while loop until every task shows a complete or failed output, recording which ones succeeded and which failed. You could then have another script download and rsync the ones that succeeded to somewhere safe.

Or..something like that.
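As a sketch of that idea, the status field can be pulled out of a task response the same way the TOKEN is, by parsing the JSON. The document below is a made-up stand-in for a real Glance task response (a real one has more fields):

```shell
# Made-up sample of a Glance task response, for illustration only.
SAMPLE_TASK='{"id": "sample-task-id", "type": "export", "status": "success"}'

# Extract the status field by parsing the JSON.
STATUS=$(echo "$SAMPLE_TASK" | python3 -c 'import json, sys; print(json.load(sys.stdin)["status"])')

# A real polling loop would re-run the task curl until a terminal state, e.g.:
# while [ "$STATUS" != "success" ] && [ "$STATUS" != "failure" ]; do sleep 30; ...; done
echo "task status: $STATUS"
```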

I love it when one of our customers makes me think really hard. Gotta love that!

Analyzing Error with Apache2

This customer was receiving an [an error occurred while processing this directive] error. This concerns SSI and required more analysis; here is how I went about it.

I have taken a look at your server and I can see you're running Apache 2.2.16.

$ curl -I somewebsite.com
HTTP/1.1 301 Moved Permanently
Date: Tue, 02 Feb 2016 17:08:44 GMT
Server: Apache/2.2.16 (Debian)
Location: somewebsite.com
Vary: Accept-Encoding
Content-Type: text/html; charset=iso-8859-1

Although I can’t be certain, it looks like there is an issue with the configuration of Apache, potentially with the moved-permanently redirect, but I can’t say for certain from the error message given in the HTML page.

The web page is certainly responding now that the port is in state OPEN/ACTIVE though, so we are making progress.

$ curl somewebsite.com

301 Moved Permanently

The document has moved here.

Apache/2.2.16 (Debian) Server at somewebsite.com Port 80

It looks like something happens after the redirect/moved permanently.

I would recommend visiting the site while tailing the access.log and error.log on your server. These are usually /var/log/apache2/access.log and /var/log/apache2/error.log, or /var/log/httpd/access.log and /var/log/httpd/error.log; depending on the distribution and configuration this can sometimes differ.

Run something like this:

tail -f /var/log/apache2/error.log

and then load the website in your browser and observe the errors seen by Apache on the server; it's most likely the output will be far more verbose in the backend logs than in the front-end error given on the web page.

For more information on SSI (server side includes), please see the Apache documentation (I haven’t discovered exactly where the code is wrong yet and am waiting for the customer to give me their virtual hosts configuration / httpd.conf):

https://httpd.apache.org/docs/2.2/howto/ssi.html

At this point, if you require additional assistance, we’d benefit from seeing your virtual hosts configuration from within httpd.conf.

Testing Cloud Files API calls

So a customer was having issues with cloudfuse, the virtual ‘cloud files’ hard disk, so we needed to test whether their auth was working correctly:

#!/bin/bash
# Diagnostic script by Adam Bull
# Test Cloud Files Auth
# Tuesday, 02/02/2016

# Username used to login to control panel
USERNAME='mycloudusername'

# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'

# This section simply retrieves and sets the TOKEN

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Container to upload to (container must exist)

CONTAINER=testing

LOCALFILENAME="/root/mytest.txt"
FILENAME="mytest.txt"

# PUT command; note MossoCloudFS_customeridhere needs to be populated with the correct value. This is the number in the mycloud URL when logging in.
curl -i -v -X PUT "https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_101100/$CONTAINER/$FILENAME" -T "$LOCALFILENAME" -H "X-Auth-Token: $TOKEN"