Using Rackspace Cloud Block Storage from Python

Rackspace announced the availability of their cloud block storage offering for their OpenStack-based public cloud last week.  This is intended to provide some parity with Amazon’s Elastic Block Store (EBS) service, with some additional nice features, not the least of which is being able to back your volumes with an SSD.  But while there is a ton of information out there on using the boto library to work with EC2, there’s a lot less on working with Rackspace Cloud Servers from Python.

To start with, you’ll need to install the python-cinderclient package using pip. Once you have it installed, you’ll want to log into your Rackspace Cloud console and get your username, your API key, and your cloud account number. For convenience’s sake, I put these in the environment variables OS_USERNAME, OS_PASSWORD, and OS_TENANT_NAME respectively so that I can access them without having to cut and paste all the time.
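If you want to fail fast when one of those variables is missing, a small helper along these lines can gather them up front (a sketch — load_credentials is a hypothetical name, not part of python-cinderclient):

```python
import os

# The three environment variables described above.
REQUIRED_VARS = ("OS_USERNAME", "OS_PASSWORD", "OS_TENANT_NAME")

def load_credentials():
    """Collect Rackspace credentials from the environment,
    raising a helpful error if any are missing."""
    missing = [v for v in REQUIRED_VARS if v not in os.environ]
    if missing:
        raise RuntimeError("Missing environment variables: %s"
                           % ", ".join(missing))
    return dict((v, os.environ[v]) for v in REQUIRED_VARS)
```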

[Screenshot: the Rackspace Cloud API key screen — you can find your API keys here after logging into the Rackspace Cloud console]

Once you have these, we can move on to getting a connection to the API so that we can perform all of the calls that we want.

>>> import cinderclient.client as cinder
>>> import os
>>> if "CINDER_RAX_AUTH" not in os.environ:
...   os.environ["CINDER_RAX_AUTH"] = "1"
>>> conn = cinder.Client("1", os.environ["OS_USERNAME"],
...    os.environ["OS_PASSWORD"],
...    tenant_id=os.environ["OS_TENANT_NAME"],
...    region_name="DFW",
...    auth_url="https://identity.api.rackspacecloud.com/v2.0/",
...    )

A few things here can use some explanation. The first is the CINDER_RAX_AUTH piece. While python-novaclient supports plugins for authenticating with different OpenStack providers, this hasn’t yet been added to python-cinderclient. So we instead take advantage of a little hack that lets us speak the Rackspace-specific authentication protocol. The other is the region name — our servers are located in the Dallas data center, so we choose DFW, but if you are in Chicago or London you’ll want ORD or LON.
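If you juggle servers in more than one data center, a tiny lookup table keeps the region codes straight (REGIONS and region_for are hypothetical helpers covering just the three regions mentioned above):

```python
# Region codes for the data centers mentioned above (hypothetical helper).
REGIONS = {"Dallas": "DFW", "Chicago": "ORD", "London": "LON"}

def region_for(data_center):
    """Translate a data center name into the region_name
    string that the client constructor expects."""
    try:
        return REGIONS[data_center]
    except KeyError:
        raise ValueError("Unknown data center: %s" % data_center)
```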

Now that we have a connection, let’s create a volume and attach it to one of our instances.

>>> volume = conn.volumes.create(100, volume_type="SATA")
>>> volume.attach(instance_id, "/dev/xvdd")

This will create a 100 GB volume backed by SATA storage and then attach it to the instance_id you provide as /dev/xvdd. You can specify a size between 100 GB and 1024 GB, either SATA or SSD as the volume type, and the block device that the volume is presented as to your guest instance.
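Since the API will reject out-of-range requests anyway, you may prefer to catch bad parameters locally before making the call. A minimal sketch, assuming the limits above (validate_volume_request is a hypothetical helper):

```python
def validate_volume_request(size_gb, volume_type):
    """Sanity-check volume parameters against the limits described
    above before hitting the API (hypothetical helper)."""
    if not 100 <= size_gb <= 1024:
        raise ValueError("size must be 100-1024 GB, got %d" % size_gb)
    if volume_type not in ("SATA", "SSD"):
        raise ValueError("volume_type must be SATA or SSD, got %r"
                         % volume_type)
```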

If you want to look at the volumes you have in your account, there is a straightforward list method, which returns volume objects that you can inspect for various attributes.

>>> volumes = conn.volumes.list()
>>> print volumes[0].id, volumes[0].size, volumes[0].attachments

The final thing that we’ll look at is creating a snapshot of your volumes.

>>> snap = conn.volume_snapshots.create(volume.id, force=True)

This will give you a snapshot of a given volume. Note that you have to use force=True if the volume is attached to an instance. As with EBS snapshots on EC2, you can later create a volume from a given snapshot to share common data across instances or to do backup and recovery.
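Restoring amounts to passing the snapshot’s id back into volumes.create. A sketch, assuming volumes.create accepts a snapshot_id keyword (restore_from_snapshot is a hypothetical wrapper):

```python
def restore_from_snapshot(conn, snap, size_gb):
    """Create a fresh volume seeded with the snapshot's data
    (sketch; assumes volumes.create takes a snapshot_id keyword)."""
    return conn.volumes.create(size_gb, snapshot_id=snap.id)
```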

Now let’s clean up the resources we’ve created to avoid being charged for them without using them.

>>> snap.delete()
>>> volume.detach()
>>> volume.delete()

This will delete the snapshot we created, detach the volume from our instance, and then delete the volume. Be careful — these are destructive operations, so if you have data stored on the volume, you will lose it!

Hope this helps if you’re just getting started automating your usage of block storage at Rackspace Cloud!
