Creating Isolated Cloud Networks through the API in Rackspace Cloud

Hey! So, today I was playing around with the Cloud Networking API and thought I would document the basic process of creating a network. It's simple enough and follows the same logic as many of my other tutorials on cloud files, load balancers and so on.

#!/bin/bash

USERNAME='mycloudusername'
APIKEY='mycloudapikey'
ACCOUNT_NUMBER=10010101
API_ENDPOINT="https://lon.networks.api.rackspacecloud.com/v2.0"

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

curl -s -v  \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNT_NUMBER" \
-H "Accept: application/json"  \
-X POST -d @create-network.json "$API_ENDPOINT/networks" | python -mjson.tool

For the above code to create a new network, you need to create the create-network.json file it references. It needs to be in this format:

{
    "network":
    {
        "name": "Isolatednet",
        "shared": false,
        "tenant_id": "10010101"
    }
}

It's important to note that you need to define the tenant_id; that's your account number, which appears in the URL when you log in to the MyCloud control panel.

The output looks like this:

* Connection #0 to host lon.networks.api.rackspacecloud.com left intact
{
    "network": {
        "admin_state_up": true,
        "id": "ae36972f-5cba-4327-8bff-15d8b05dc3ee",
        "name": "Isolatednet",
        "shared": false,
        "status": "ACTIVE",
        "subnets": [],
        "tenant_id": "10045567"
    }
}
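
To verify the network was created, a GET against the same endpoint lists your networks. Here is a quick sketch reusing the $TOKEN, $ACCOUNT_NUMBER and $API_ENDPOINT variables from the script above:

curl -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: $ACCOUNT_NUMBER" \
-H "Accept: application/json" \
"$API_ENDPOINT/networks" | python -mjson.tool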

Ansible roles/glance/tasks/main.yml playbook for Glance API Deployment

I am working on a project at work to deploy Keystone and Glance. I've currently been tasked with finishing off the Glance role part of the playbook: the basic setup tasks, retrieving the base qcow2 images for the various distributions, and automatically populating the Glance image list. Here is how I did it.

This playbook uses an encrypted group_vars vars.yml, which contains sensitive password variables such as GLANCE_DBPASS.
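
If you have not set the vault up before, ansible-vault takes care of the encryption; the exact paths and playbook name below are illustrative rather than gospel:

# create (or later edit) the encrypted vars file holding GLANCE_DBPASS, GLANCE_PASS and friends (illustrative path)
ansible-vault create group_vars/all/vars.yml
ansible-vault edit group_vars/all/vars.yml

# run the playbook, supplying the vault password so the encrypted vars can be read (site.yml is an example name)
ansible-playbook site.yml --ask-vault-pass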

The tasks file below shows how the Glance SQL database is created, how permissions are granted, how the database is populated, and how images are uploaded to Glance for use by OpenStack Compute.


File: osan/roles/glance/tasks/main.yml

---

   - name: Create glance database
     mysql_db:
        name: glance

   - name: Configure database user privileges
     mysql_user:
       name: glance
       host: "{{ item }}"
       password: "{{ GLANCE_DBPASS }}"
       priv: glance.*:ALL
     with_items:
       - "%"
       - localhost

#   - name: Set credentials to admin
#   command: source admin-openrc.sh

   - name: Create the Glance user service credentials
     command: openstack user create --domain default --password {{ GLANCE_PASS }} glance
     environment: admin_env
     ignore_errors: yes

   - name: Add the admin role to the glance user and service project
     command: openstack role add --project service --user glance admin
     environment: admin_env
     ignore_errors: yes

   - name: Create the glance service entity
     command: openstack service create --name glance --description "OpenStack Image service" image
     environment: admin_env
     ignore_errors: yes

   - name: Create the Image service public API endpoint for glance
     command: openstack endpoint create --region RegionOne image public http://controller:9292
     environment: admin_env
     ignore_errors: yes

   - name: Create the Image service internal API endpoint for glance
     command: openstack endpoint create --region RegionOne image internal http://controller:9292
     environment: admin_env
     ignore_errors: yes

   - name: Create the Image service admin API endpoint for glance
     command: openstack endpoint create --region RegionOne image admin 'http://controller:9292'
     environment: admin_env
     ignore_errors: yes

   - name: Install Glance and Dependencies
     yum: pkg={{item}} state=installed
     with_items:
     - openstack-glance
     - python-glance
     - python-glanceclient

   - name: replace glance-api.conf file
     template: src=glance-api.conf.ansible dest=/etc/glance/glance-api.conf owner=root

   - name: replace glance-registry.conf file
     template: src=glance-registry.conf.ansible dest=/etc/glance/glance-registry.conf owner=root

   - name: Populate the Image service database
     command: su -s /bin/sh -c "glance-manage db_sync" glance

   - name: Start & Enable openstack-glance-registry.service
     service: name=openstack-glance-registry.service enabled=yes state=started

   - name: Start & Enable openstack-glance-api.service
     service: name=openstack-glance-api.service enabled=yes state=started


   - name: Retrieve CentOS 7 x86_64.qcow2
     get_url: url=http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1503.qcow2 dest=/root/CentOS-7-x86_64-GenericCloud-1503.qcow2 mode=0600

   - name: Populate Glance DB with CentOS 7 qcow2 Image
     command:  glance image-create --name "centos7-x86_x64" --file /root/CentOS-7-x86_64-GenericCloud-1503.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve Cirros qcow2 Image
     get_url: url=http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img dest=/root/cirros-0.3.4-x86_64-disk.img mode=0600

   - name: Import Cirros qcow Image to Glance
     command:  glance image-create --name "cirros-0.3.4_x86_64" --file /root/cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve Ubuntu 14.04 Trusty Tahr qcow2 Image
     get_url: url=http://cloud-images.ubuntu.com/releases/14.04/release-20140416.1/ubuntu-14.04-server-cloudimg-amd64-disk1.img dest=/root/ubuntu-14.04-server-cloudimg-amd64-disk1.img mode=0600

   - name: Import Ubuntu 14.04 Trusty Tahr to Glance
     command: glance image-create --name "ubuntu-14.04-lts-trusty-tahr-amd64" --file /root/ubuntu-14.04-server-cloudimg-amd64-disk1.img --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve Fedora 23 qcow2 Image
     get_url: url=https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Base-23-20151030.x86_64.qcow2 dest=/root/Fedora-Cloud-Base-23-20151030.x86_64.qcow2 mode=0600

   - name: Import Fedora 23 qcow2 Image to Glance
     command: glance image-create --name "fedora-23-amd64" --file /root/Fedora-Cloud-Base-23-20151030.x86_64.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve Debian 8 amd64 qcow2 Image
     get_url: url=http://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2 dest=/root/debian-8.2.0-openstack-amd64.qcow2 mode=0600

   - name: Import Debian 8 to Glance
     command: glance image-create --name "debian8-2-0-amd64" --file /root/debian-8.2.0-openstack-amd64.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve OpenSuSE 13.2 Guest Qcow2 Image
     get_url: url=http://download.opensuse.org/repositories/Cloud:/Images:/openSUSE_13.2/images/openSUSE-13.2-OpenStack-Guest.x86_64.qcow2 dest=/root/openSUSE-13.2-OpenStack-Guest.x86_64.qcow2 mode=0600

   - name: Import OpenSuSE 13.2 to Glance
     command: glance image-create --name "opensuse-13-2-amd64" --file /root/openSUSE-13.2-OpenStack-Guest.x86_64.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress

The above is in YAML format, which is really tricky, so watch your syntax when using it. It is VERY sensitive to indentation.

After this runs we are left with a nice glance image-list output. Glance is ready for Compute to use the qcow2 images we uploaded using the OpenStack Glance API.

+--------------------------------------+------------------------------------+
| ID                                   | Name                               |
+--------------------------------------+------------------------------------+
| f58aaed4-fda7-41b3-a0c9-e99d6c956afd | centos7-x86_x64                    |
| b4c7224b-0e0d-475c-880c-f48e1c0608b2 | cirros-0.3.4_x86_64                |
| 975accd5-d9bc-4485-86df-88e97e7f3237 | debian8-2-0-amd64                  |
| 41e7949c-3e17-434f-8008-4551673da496 | fedora-23-amd64                    |
| 092338df-6e8e-471b-93ff-07b339510636 | opensuse-13-2-amd64                |
| ae707804-3dd5-474f-ab8d-3d6e855e420d | ubuntu-14.04-lts-trusty-tahr-amd64 |
+--------------------------------------+------------------------------------+

Resizing a Rackspace Performance Server

It's possible for the customer to do this through the API, but it is without express warranty. It's not possible to resize Performance servers through the MyCloud control panel, so to do it you will need to use the API directly with curl, or nova, or what I like to use: the supernova wrapper for nova. It's quite simple really.

The below example resizes a Performance server from 2GB to 4GB:

supernova customer resize --poll uuidgoeshere performance1-4
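
If the server is left in VERIFY_RESIZE state once the resize completes, it needs confirming before it is finalised; with supernova that would look something like this:

supernova customer resize-confirm uuidgoeshere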

Using Python with nova-client to list servers

A customer came to me today complaining about his code not working. He'd forgotten to include the 'account-number', also referred to as the project_id in OpenStack. Without it, you're going to get an HTTP 405, i.e. MethodNotAllowed: Method Not Allowed (HTTP 405).

from novaclient import client

# arguments: API version, username, API key (or password), account number (project id), auth URL
nova = client.Client("2", "username", "password", "account-number", "https://lon.identity.api.rackspacecloud.com/v2.0")

servers = nova.servers.list()

print servers

This does what it says on the tin: it queries the API using the nova Python module to extract the server list.
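
For comparison, the same list can be pulled with raw curl and a token, much like the earlier examples; the sketch below assumes the LON region and that $TOKEN and $ACCOUNT_NUMBER are already set:

curl -s -H "X-Auth-Token: $TOKEN" \
"https://lon.servers.api.rackspacecloud.com/v2/$ACCOUNT_NUMBER/servers" | python -m json.tool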

Deploying Devstack successfully in CentOS 7

So, do you want to set up your own OpenStack infrastructure, with Cinder, Nova, the Nova API, Keystone and such? That's easy enough. Here is how to do it.

Step 1. Deploy CentOS 7; any basic install should be fine. I deployed using a Rackspace cloud server 8GB Standard instance type (a standard install should be fine!).

Step 2. Add stack user

adduser stack

Step 3. Give the stack user passwordless sudo, making sure sudo is installed first

yum install -y sudo
echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

Step 4. Modify /etc/passwd so that the home directory for stack is /opt/stack (devstack needs this), then create the directory and chown it

vi /etc/passwd
# make sure the home directory for stack is /opt/stack (that's all!)
mkdir /opt/stack
chown -R stack:stack /opt/stack

Step 5. Clone Devstack from git

sudo yum install -y git
su stack
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack

Step 6. Copy the sample local.conf into place

cp samples/local.conf .
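
The sample local.conf sets the admin and service passwords. If you would rather write a minimal one yourself instead of editing the copy, something along these lines works (the passwords are placeholders):

# write a minimal local.conf with placeholder passwords
cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
EOF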

Step 7. Deploy stack

./stack.sh

Booting an Image in a specific cell/region

This particular one-liner uses the Nova API to boot the image with id 9876fa2-99df-4be3-989f-eec1e8c08afd using the general1-4 flavor (General Purpose, 4GB RAM); the hints ensure that the server reaches the correct cell and hypervisor host.

supernova customer boot --image 9876fa2-99df-4be3-989f-eec1e8c08afd --flavor general1-4 --hint target_cell='lon!z0001' --hint 0z0ne_target_host=c-10-0-12-119 myservername

List all Cloud Server Details through the API

Well, this one is a bit cheeky, because I borrowed it from a colleague of mine, David Coon. Thanks David, I appreciate your assistance!

#!/bin/bash


auth() {
    read -p "What is your Account Number: " ddi
    read -p "What is your username: " username
    read -p "What is your APIkey: " APIkey
    read -p "Which Datacenter are your servers in? " dc
}

token() {
    
    token=`curl -s https://identity.api.rackspacecloud.com/v2.0/tokens -X POST \
    -d '{"auth":{"RAX-KSKEY:apiKeyCredentials":{"username":"'$username'", "apiKey":"'$APIkey'"}}}' \
    -H "Content-Type: application/json" | python -m json.tool  | sed -n '/expires/{n;p;}' |sed -e 's/^.*"id": "\(.*\)",/\1/'`
    echo "Your API Token is ---->  $token"
}

listservers() {
    curl -s -H "X-Auth-Token: $token" "https://$dc.servers.api.rackspacecloud.com/v2/$ddi/servers" | python -m json.tool
}

getservers() {
    read -p "What is the server id?" id
    curl -s -H "X-Auth-Token: $token" "https://$dc.servers.api.rackspacecloud.com/v2/$ddi/servers/$id" | python -m json.tool
}

auth
token
listservers
getservers
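
Save the script under any name you like (list-servers.sh is just an example), make it executable and run it; it prompts for the account details, prints a token, then dumps the server list and a single server's details:

chmod +x list-servers.sh   # list-servers.sh is an arbitrary example name
./list-servers.sh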

Deleting All the Files in a Cloud Container

Hey. So if only I had a cake for every customer that asked if we could delete all of their cloud files in a single container for them (I'd be really, really, really fat, so maybe that is a bad idea). A dollar though, now there's a thought.

On that note, here is a dollar. Probably the best dollar you'll see today. You could probably do this with PHP, Bash or swiftly, but doing it *THIS* way is also awesome, and I learnt (although some might say learned) something. Here is how I did it. I should also thank Matt Dorn for his contributions to this article; without him it wouldn't exist.

Step 1. Install Python and pip

yum install -y python python-pip        # RHEL/CentOS (python-pip is in EPEL)
apt-get install -y python python-pip    # Debian/Ubuntu

Step 2. Install pyrax (the Rackspace Python OpenStack library)

pip install pyrax

Step 3. Install Libevent (the --prefix below uses $VIRTUAL_ENV, which is only set inside an activated virtualenv)

curl -L -O https://github.com/downloads/libevent/libevent/libevent-2.0.21-stable.tar.gz
tar xzf libevent-2.0.21-stable.tar.gz
cd libevent-2.0.21-stable
./configure --prefix="$VIRTUAL_ENV"
make && make install
cd $VIRTUAL_ENV/..

Step 4. Install Greenlet and Gevent


pip install greenlet
pip install gevent

Step 5. Check gevent library loading in Python Shell

python
import gevent

If nothing comes back, the gevent lib works OK.

Step 6. Create the code to delete all the files

#!/usr/bin/python
# -*- coding: utf-8 -*-
from gevent import monkey
from gevent.pool import Pool
from gevent import Timeout
monkey.patch_all()
import pyrax


def delete_object(obj):
    # added timeout of 5 seconds just in case a delete hangs
    with Timeout(5, False):
        try:
            obj.delete()
        except:
            pass


if __name__ == '__main__':
    pool = Pool(100)

    pyrax.set_setting('identity_type', 'rackspace')
    pyrax.set_setting('verify_ssl', False)
    # Rackspace credentials go here. Region LON, username: mycloudusername, API key: myrackspaceapikey.
    pyrax.set_setting('region', 'LON')
    pyrax.set_credentials('mycloudusername', 'myrackspaceapikey')

    cf = pyrax.cloudfiles
    # Remember to set the container correctly (which container to delete all files within?)
    container = cf.get_container('testing')
    objects = container.get_objects(full_listing=True)

    # spawn up to 100 concurrent deletes and wait for them all to finish
    for obj in objects:
        pool.spawn(delete_object, obj)
    pool.join()

It's well worth noting that the same approach can be used to list all of the objects rather than delete them, but that is something for later…

Step 7. Execute (not me, the script!)

The timeout can be adjusted, and the script can be run several times to make sure any files missed the first time get deleted on a retry.
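
Assuming you saved the script above as delete-container-objects.py (the filename is arbitrary), running it is simply:

python delete-container-objects.py   # delete-container-objects.py is an example filename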

Resetting Metadata of a RAX Rackspace Xen Server

At work we had some customers complaining of metadata not being removed on their servers.


nova --os-username username --os-password apigoeshere meta uuidgoeshere delete rax:reboot_window

It was pretty simple to do as a one-liner, right?

But imagine we have a list.txt full of 100 servers that need clearing for an individual customer; that would be a nightmare to do manually like the above. So we can do it like this:

for server in $(cat list.txt); do nova --os-username username --os-password apikeygoeshere meta $server delete rax:reboot_window; done

Now that is pretty cool, and it saved me and my colleagues a lot of time.
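
If you want to double-check the key has really gone, nova show prints each server's remaining metadata, so something like this (same credentials as above) does the trick:

for server in $(cat list.txt); do nova --os-username username --os-password apikeygoeshere show $server | grep metadata; done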

Using configdrive cloud-config to execute commands post server creation

A lot of customers might want to set up automation for installing common packages and making configuration changes on vanilla images. One way to provide that automation is to use config drive, which allows you to execute commands post server creation, as well as to install certain packages that are required.

The good thing about using this is that you can get a server up and running with a single line of automation plus your configuration file (which contains all the automation). Here are the steps you need to do it, and it is actually rather simple!

Step 1. Create Automation File .cloud-config

#cloud-config

packages:

 - apache2
 - php5
 - php5-mysql
 - mysql-server

runcmd:

 - wget http://wordpress.org/latest.tar.gz -P /tmp/
 - tar -zxf /tmp/latest.tar.gz -C /var/www/
 - mysql -e "create database wordpress; create user 'wpuser'@'localhost' identified by 'changemetoo'; grant all privileges on wordpress . \* to 'wpuser'@'localhost'; flush privileges;"
 - mysql -e "drop database test; drop user 'test'@'localhost'; flush privileges;"
 - mysqladmin -u root password 'changeme'

This installs apache2, php5, php5-mysql and mysql-server, downloads wordpress to /tmp and then extracts it into the main /var/www folder, and finally creates the wordpress database and user.

Step 2: Create server using cloud-config in Supernova via the Rackspace API
(not hard! easy!)

supernova customer boot --config-drive=true --flavor performance1-1 --image 09de0a66-3156-48b4-90a5-1cf25a905207 --user-data cloud-config testing-configdrive


+--------------------------------------+-------------------------------------------------------------------------------+
| Property                             | Value                                                                         |
+--------------------------------------+-------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                        |
| OS-EXT-STS:power_state               | 0                                                                             |
| OS-EXT-STS:task_state                | scheduling                                                                    |
| OS-EXT-STS:vm_state                  | building                                                                      |
| RAX-PUBLIC-IP-ZONE-ID:publicIPZoneId |                                                                               |
| accessIPv4                           |                                                                               |
| accessIPv6                           |                                                                               |
| adminPass                            | SECUREPASSWORDHERE                                                            |
| config_drive                         | True                                                                          |
| created                              | 2015-10-20T11:10:23Z                                                          |
| flavor                               | 1 GB Performance (performance1-1)                                             |
| hostId                               |                                                                               |
| id                                   | ef084d0f-70cc-4366-b348-daf987909899                                          |
| image                                | Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM) (09de0a66-3156-48b4-90a5-1cf25a905207) |
| key_name                             | -                                                                             |
| metadata                             | {}                                                                            |
| name                                 | testing-configdrive                                                           |
| progress                             | 0                                                                             |
| status                               | BUILD                                                                         |
| tenant_id                            | 10000000                                                                      |
| updated                              | 2015-10-20T11:10:24Z                                                          |
| user_id                              | 05b18e859cad42bb9a5a35ad0a6fba2f                                              |
+--------------------------------------+-------------------------------------------------------------------------------+

In my case my supernova was already set up; I have another article on this site on how to set up supernova, so take a look there for how to install it. My supernova configuration looks like this (with the API key removed, of course!):

[customer]
OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/
OS_AUTH_SYSTEM=rackspace
#OS_COMPUTE_API_VERSION=1.1
NOVA_RAX_AUTH=1
OS_REGION_NAME=LON
NOVA_SERVICE_NAME=cloudServersOpenStack
OS_PASSWORD=90bb3pd0a7MYMOCKAPIKEYc419572678abba136a2
OS_USERNAME=mycloudusername
OS_TENANT_NAME=100000

OS_TENANT_NAME is your customer number; take it from the URL in mycloud.rackspace.com after logging on. OS_PASSWORD is your API key; get it from the account settings page in mycloud.rackspace.co.uk. And OS_USERNAME is the username you use to log in to the Rackspace MyCloud control panel. Simples!

Step 3: Confirm your server built as expected

root@testing-configdrive:~# ls /tmp
latest.tar.gz

root@testing-configdrive:~# ls /var/www/wordpress
index.php    readme.html      wp-admin            wp-comments-post.php  wp-content   wp-includes        wp-load.php   wp-mail.php      wp-signup.php     xmlrpc.php
license.txt  wp-activate.php  wp-blog-header.php  wp-config-sample.php  wp-cron.php  wp-links-opml.php  wp-login.php  wp-settings.php  wp-trackback.php

In my case, I noticed that everything went fine and 'wordpress' installed to /var/www just fine. But what if I wanted the wordpress directory to serve as the default html docroot? That's pretty easy; it just takes an extra couple of commands.

mv /var/www/html /var/www/html_old
mv /var/www/wordpress /var/www/html

So let's add that to our automation script:

#cloud-config

packages:

 - apache2
 - php5
 - php5-mysql
 - mysql-server

runcmd:

 - wget http://wordpress.org/latest.tar.gz -P /tmp/
 - tar -zxf /tmp/latest.tar.gz -C /var/www/; mv /var/www/html /var/www/html_old; mv /var/www/wordpress /var/www/html
 - mysql -e "create database wordpress; create user 'wpuser'@'localhost' identified by 'changemetoo'; grant all privileges on wordpress . \* to 'wpuser'@'localhost'; flush privileges;"
 - mysql -e "drop database test; drop user 'test'@'localhost'; flush privileges;"
 - mysqladmin -u root password 'changeme'

Job done. Just a case of re-running the command now:

supernova customer boot --config-drive=true --flavor performance1-1 --image 09de0a66-3156-48b4-90a5-1cf25a905207 --user-data cloud-config testing-configdrive

And then checking that our wordpress website loads correctly without any additional configuration or having to log in to the machine! Not bad automation there.
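
A quick way to check from your own machine is to hit the server's public IP and make sure Apache answers:

curl -sI http://203.0.113.10/ | head -n 1   # 203.0.113.10 is a placeholder; use your server's accessIPv4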

I could have quite easily achieved something like this by using the API directly. No supernova and no filesystem. Just the raw command! Yeah that’d be better than not bad!

Creating Post-BUILD Automation through the API via cURL

Here’s how to do it.

Step 1. Prepare your execution script by converting it to base64 character encoding

Unencoded Script:

#cloud-config

packages:

 - apache2
 - php5
 - php5-mysql
 - mysql-server

runcmd:

 - wget http://wordpress.org/latest.tar.gz -P /tmp/
 - tar -zxf /tmp/latest.tar.gz -C /var/www/; mv /var/www/html /var/www/html_old; mv /var/www/wordpress /var/www/html
 - mysql -e "create database wordpress; create user 'wpuser'@'localhost' identified by 'changemetoo'; grant all privileges on wordpress . \* to 'wpuser'@'localhost'; flush privileges;"
 - mysql -e "drop database test; drop user 'test'@'localhost'; flush privileges;"
 - mysqladmin -u root password 'changeme'

Encoded Script:

I2Nsb3VkLWNvbmZpZw0KDQpwYWNrYWdlczoNCg0KIC0gYXBhY2hlMg0KIC0gcGhwNQ0KIC0gcGhwNS1teXNxbA0KIC0gbXlzcWwtc2VydmVyDQoNCnJ1bmNtZDoNCg0KIC0gd2dldCBodHRwOi8vd29yZHByZXNzLm9yZy9sYXRlc3QudGFyLmd6IC1QIC90bXAvDQogLSB0YXIgLXp4ZiAvdG1wL2xhdGVzdC50YXIuZ3ogLUMgL3Zhci93d3cvIDsgbXYgL3Zhci93d3cvaHRtbCAvdmFyL3d3dy9odG1sX29sZDsgbXYgL3Zhci93d3cvd29yZHByZXNzIC92YXIvd3d3L2h0bWwNCiAtIG15c3FsIC1lICJjcmVhdGUgZGF0YWJhc2Ugd29yZHByZXNzOyBjcmVhdGUgdXNlciAnd3B1c2VyJ0AnbG9jYWxob3N0JyBpZGVudGlmaWVkIGJ5ICdjaGFuZ2VtZXRvbyc7IGdyYW50IGFsbCBwcml2aWxlZ2VzIG9uIHdvcmRwcmVzcyAuIFwqIHRvICd3cHVzZXInQCdsb2NhbGhvc3QnOyBmbHVzaCBwcml2aWxlZ2VzOyINCiAtIG15c3FsIC1lICJkcm9wIGRhdGFiYXNlIHRlc3Q7IGRyb3AgdXNlciAndGVzdCdAJ2xvY2FsaG9zdCc7IGZsdXNoIHByaXZpbGVnZXM7Ig0KIC0gbXlzcWxhZG1pbiAtdSByb290IHBhc3N3b3JkICdjaGFuZ2VtZSc=
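
If you want to generate that encoded string yourself, piping the file through base64 does the job; this assumes the cloud-config above is saved locally as a file named cloud-config, and -w 0 keeps the output on a single line:

base64 -w 0 cloud-config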

Step 2: Get Authorization token from identity API endpoint

Command:


$ curl -s https://identity.api.rackspacecloud.com/v2.0/tokens -X 'POST' \
       -d '{"auth":{"passwordCredentials":{"username":"adambull", "password":"superBRAIN%!7912105!"}}}' \
       -H "Content-Type: application/json"

Response:

{"access":{"token":{"id":"AAD4gu67KlOPQeRSTJVC_8MLrTomBCxN6HdmVhlI4y9SiOa-h-Ytnlls2dAJo7wa60E9nQ9Se0uHxgJuHayVPEssmIm--MOCKTOKEN_EXAMPLE-0Wv5n0ZY0A","expires":"2015-10-21T15:06:44.577Z"

It's also possible to use your API key to retrieve the token ID used by the API (if you don't like using your control panel password!):

curl -s https://identity.api.rackspacecloud.com/v2.0/tokens -X 'POST' \
       -d '{"auth":{"RAX-KSKEY:apiKeyCredentials":{"username":"yourUserName", "apiKey":"yourApiKey"}}}' \
       -H "Content-Type: application/json" | python -m json.tool

Step 3: Construct Script to Execute the Command Directly through the API


#!/bin/sh

# Your Rackspace account DDI; look for a number like the one below when you log in to the Rackspace MyCloud control panel
account='10000000'

# Using the token that was returned to us in step 2
token="AAD4gu6FH-KoLCKiPWpqHONkCqGJ0YiDuO6yvQG4J1jRSjcQoZSqRK94u0jaYv5BMOCKTOKENpMsI3NEkjNqApipi0Lr2MFLjw"

# London Datacentre Endpoint; could be SYD, IAD, ORD, DFW etc
curl -v https://lon.servers.api.rackspacecloud.com/v2/$account/servers \
       -X POST \
       -H "X-Auth-Project-Id: $account" \
       -H "Content-Type: application/json" \
       -H "Accept: application/json" \
       -H "X-Auth-Token: $token" \
       -d '{"server": {"name": "testing-cloud-init-api", "imageRef": "09de0a66-3156-48b4-90a5-1cf25a905207", "flavorRef": "general1-1", "config_drive": "true", "user_data": "I2Nsb3VkLWNvbmZpZw0KDQpwYWNrYWdlczoNCg0KIC0gYXBhY2hlMg0KIC0gcGhwNQ0KIC0gcGhwNS1teXNxbA0KIC0gbXlzcWwtc2VydmVyDQoNCnJ1bmNtZDoNCg0KIC0gd2dldCBodHRwOi8vd29yZHByZXNzLm9yZy9sYXRlc3QudGFyLmd6IC1QIC90bXAvDQogLSB0YXIgLXp4ZiAvdG1wL2xhdGVzdC50YXIuZ3ogLUMgL3Zhci93d3cvIDsgbXYgL3Zhci93d3cvaHRtbCAvdmFyL3d3dy9odG1sX29sZDsgbXYgL3Zhci93d3cvd29yZHByZXNzIC92YXIvd3d3L2h0bWwNCiAtIG15c3FsIC1lICJjcmVhdGUgZGF0YWJhc2Ugd29yZHByZXNzOyBjcmVhdGUgdXNlciAnd3B1c2VyJ0AnbG9jYWxob3N0JyBpZGVudGlmaWVkIGJ5ICdjaGFuZ2VtZXRvbyc7IGdyYW50IGFsbCBwcml2aWxlZ2VzIG9uIHdvcmRwcmVzcyAuIFwqIHRvICd3cHVzZXInQCdsb2NhbGhvc3QnOyBmbHVzaCBwcml2aWxlZ2VzOyINCiAtIG15c3FsIC1lICJkcm9wIGRhdGFiYXNlIHRlc3Q7IGRyb3AgdXNlciAndGVzdCdAJ2xvY2FsaG9zdCc7IGZsdXNoIHByaXZpbGVnZXM7Ig0KIC0gbXlzcWxhZG1pbiAtdSByb290IHBhc3N3b3JkICdjaGFuZ2VtZSc="}}' \
      | python -m json.tool

Zomg what does this mean?

X-Auth-Token: is just the header that is sent to authorise your request. You got the token using your MyCloud username and password, or MyCloud username and API key, in step 2.
imageRef: this is just the ID assigned to the base image of Ubuntu 14.04 LTS. Take a look below at all the different images you can use (and the image id of each):

$ supernova customer image-list

| ade87903-9d82-4584-9cc1-204870011de0 | Arch 2015.7 (PVHVM)                                          | ACTIVE |                                      |
| fdaf64c7-d9f3-446c-bd7c-70349305ae91 | CentOS 5 (PV)                                                | ACTIVE |                                      |
| 21612eaf-a350-4047-b06f-6bb8a8a7bd99 | CentOS 6 (PV)                                                | ACTIVE |                                      |
| fabe045f-43f8-4991-9e6c-5cabd617538c | CentOS 6 (PVHVM)                                             | ACTIVE |                                      |
| 6595f1b7-e825-4bd2-addc-c7b1c803a37f | CentOS 7 (PVHVM)                                             | ACTIVE |                                      |
| 2c12f6da-8540-40bc-b974-9a72040173e0 | CoreOS (Alpha)                                               | ACTIVE |                                      |
| 8dc7d5d8-4ad4-41b6-acf1-958dfeadcb17 | CoreOS (Beta)                                                | ACTIVE |                                      |
| 415ca2e6-df92-44e6-ba95-8ee36b436b24 | CoreOS (Stable)                                              | ACTIVE |                                      |
| eaaf94d8-55a6-4bfa-b0a8-473febb012dc | Debian 7 (Wheezy) (PVHVM)                                    | ACTIVE |                                      |
| c3aacaf9-8d1e-4d41-bb47-045fbc392a1c | Debian 8 (Jessie) (PVHVM)                                    | ACTIVE |                                      |
| 081a8b12-515c-41c9-8ce4-13139e1904f7 | Debian Testing (Stretch) (PVHVM)                             | ACTIVE |                                      |
| 498c59a0-3c26-4357-92c0-dd938baca3db | Debian Unstable (Sid) (PVHVM)                                | ACTIVE |                                      |
| 46975098-7799-4e72-8ae0-d6ef9d2d26a1 | Fedora 21 (PVHVM)                                            | ACTIVE |                                      |
| 0976b31e-f6d7-4d74-81e9-007fca25067e | Fedora 22 (PVHVM)                                            | ACTIVE |                                      |
| 7a1cf8de-7721-4d56-900b-1e65def2ada5 | FreeBSD 10 (PVHVM)                                           | ACTIVE |                                      |
| 7451d607-426d-416f-8d29-97e57f6f3ad5 | Gentoo 15.3 (PVHVM)                                          | ACTIVE |                                      |
| 79436148-753f-41b7-aee9-5acbde16582c | OpenSUSE 13.2 (PVHVM)                                        | ACTIVE |                                      |
| 05dd965d-84ce-451b-9ca1-83a134e523c3 | Red Hat Enterprise Linux 5 (PV)                              | ACTIVE |                                      |
| 783f71f4-d2d8-4d38-b2e1-8c916de79a38 | Red Hat Enterprise Linux 6 (PV)                              | ACTIVE |                                      |
| 5176fde9-e9d6-4611-9069-1eecd55df440 | Red Hat Enterprise Linux 6 (PVHVM)                           | ACTIVE |                                      |
| 92f8a8b8-6019-4c27-949b-cf9910b84ffb | Red Hat Enterprise Linux 7 (PVHVM)                           | ACTIVE |                                      |
| 36076d08-3e8b-4436-9253-7a8868e4f4d7 | Scientific Linux 6 (PVHVM)                                   | ACTIVE |                                      |
| 6118e449-3149-475f-bcbb-99d204cedd56 | Scientific Linux 7 (PVHVM)                                   | ACTIVE |                                      |
| 656e65f7-6441-46e8-978d-0d39beaaf559 | Ubuntu 12.04 LTS (Precise Pangolin) (PV)                     | ACTIVE |                                      |
| 973775ab-0653-4ef8-a571-7a2777787735 | Ubuntu 12.04 LTS (Precise Pangolin) (PVHVM)                  | ACTIVE |                                      |
| 5ed162cc-b4eb-4371-b24a-a0ae73376c73 | Ubuntu 14.04 LTS (Trusty Tahr) (PV)                          | ACTIVE |                                      |
| ***09de0a66-3156-48b4-90a5-1cf25a905207*** | Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)                       | ACTIVE |                                      |
| 658a7d3b-4c58-4e29-b339-2509cca0de10 | Ubuntu 15.04 (Vivid Vervet) (PVHVM)                          | ACTIVE |                                      |
| faad95b7-396d-483e-b4ae-77afec7e7097 | Vyatta Network OS 6.7R9                                      | ACTIVE |                                      |
| ee71e392-12b0-4050-b097-8f75b4071831 | Windows Server 2008 R2 SP1                                   | ACTIVE |                                      |
| 5707f82f-43f0-41e0-8e51-bfb597852825 | Windows Server 2008 R2 SP1 + SQL Server 2008 R2 SP2 Standard | ACTIVE |                                      |
| b684e5a0-11a8-433e-a4b8-046137783e1b | Windows Server 2008 R2 SP1 + SQL Server 2008 R2 SP2 Web      | ACTIVE |                                      |
| d16fd3df-3b24-49ee-ae6a-317f450006e7 | Windows Server 2012                                          | ACTIVE |                                      |
| f495b41d-07e1-44c5-a3e8-65c4412a7eb8 | Windows Server 2012 + SQL Server 2012 SP1 Standard           | ACTIVE |                                      |

flavorRef: simply refers to which server type (flavor) to start up; it's pretty darn simple.

$ supernova lon flavor-list

+------------------+-------------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID               | Name                    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+------------------+-------------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 2                | 512MB Standard Instance | 512       | 20   | 0         |      | 1     |             | N/A       |
| 3                | 1GB Standard Instance   | 1024      | 40   | 0         |      | 1     |             | N/A       |
| 4                | 2GB Standard Instance   | 2048      | 80   | 0         |      | 2     |             | N/A       |
| 5                | 4GB Standard Instance   | 4096      | 160  | 0         |      | 2     |             | N/A       |
| 6                | 8GB Standard Instance   | 8192      | 320  | 0         |      | 4     |             | N/A       |
| 7                | 15GB Standard Instance  | 15360     | 620  | 0         |      | 6     |             | N/A       |
| 8                | 30GB Standard Instance  | 30720     | 1200 | 0         |      | 8     |             | N/A       |
| compute1-15      | 15 GB Compute v1        | 15360     | 0    | 0         |      | 8     |             | N/A       |
| compute1-30      | 30 GB Compute v1        | 30720     | 0    | 0         |      | 16    |             | N/A       |
| compute1-4       | 3.75 GB Compute v1      | 3840      | 0    | 0         |      | 2     |             | N/A       |
| compute1-60      | 60 GB Compute v1        | 61440     | 0    | 0         |      | 32    |             | N/A       |
| compute1-8       | 7.5 GB Compute v1       | 7680      | 0    | 0         |      | 4     |             | N/A       |
| general1-1       | 1 GB General Purpose v1 | 1024      | 20   | 0         |      | 1     |             | N/A       |
| general1-2       | 2 GB General Purpose v1 | 2048      | 40   | 0         |      | 2     |             | N/A       |
| general1-4       | 4 GB General Purpose v1 | 4096      | 80   | 0         |      | 4     |             | N/A       |
| general1-8       | 8 GB General Purpose v1 | 8192      | 160  | 0         |      | 8     |             | N/A       |
| io1-120          | 120 GB I/O v1           | 122880    | 40   | 1200      |      | 32    |             | N/A       |
| io1-15           | 15 GB I/O v1            | 15360     | 40   | 150       |      | 4     |             | N/A       |
| io1-30           | 30 GB I/O v1            | 30720     | 40   | 300       |      | 8     |             | N/A       |
| io1-60           | 60 GB I/O v1            | 61440     | 40   | 600       |      | 16    |             | N/A       |
| io1-90           | 90 GB I/O v1            | 92160     | 40   | 900       |      | 24    |             | N/A       |
| memory1-120      | 120 GB Memory v1        | 122880    | 0    | 0         |      | 16    |             | N/A       |
| memory1-15       | 15 GB Memory v1         | 15360     | 0    | 0         |      | 2     |             | N/A       |
| memory1-240      | 240 GB Memory v1        | 245760    | 0    | 0         |      | 32    |             | N/A       |
| memory1-30       | 30 GB Memory v1         | 30720     | 0    | 0         |      | 4     |             | N/A       |
| memory1-60       | 60 GB Memory v1         | 61440     | 0    | 0         |      | 8     |             | N/A       |
| performance1-1   | 1 GB Performance        | 1024      | 20   | 0         |      | 1     |             | N/A       |
| performance1-2   | 2 GB Performance        | 2048      | 40   | 20        |      | 2     |             | N/A       |
| performance1-4   | 4 GB Performance        | 4096      | 40   | 40        |      | 4     |             | N/A       |
| performance1-8   | 8 GB Performance        | 8192      | 40   | 80        |      | 8     |             | N/A       |
| performance2-120 | 120 GB Performance      | 122880    | 40   | 1200      |      | 32    |             | N/A       |
| performance2-15  | 15 GB Performance       | 15360     | 40   | 150       |      | 4     |             | N/A       |
| performance2-30  | 30 GB Performance       | 30720     | 40   | 300       |      | 8     |             | N/A       |
| performance2-60  | 60 GB Performance       | 61440     | 40   | 600       |      | 16    |             | N/A       |
| performance2-90  | 90 GB Performance       | 92160     | 40   | 900       |      | 24    |             | N/A       |
+------------------+-------------------------+-----------+------+-----------+------+-------+-------------+-----------+