Obscene Redundancy utilizing Rackspace Cloud Files

So, you may have noticed that over the past weeks and months I have been a little quieter on the article front. That's mainly because I've been working on a new GitHub project which, although simple and lightweight, is actually rather outrageously powerful.


Imagine being able to keep 15+ redundant replica copies of your files across 5 or 6 different datacentres, powered by the Rackspace Cloud Files API but with all the flexibility of the Bourne Again Shell (BASH).

This was actually quite a neat achievement and I am pleased with the results. There are still some limitations to this redundant replica application, and there are a few bugs, but it is a great proof of concept that shows what you can do with the API both quickly and cheaply (ish). Filesystems as a service will, with some further innovation in worldwide network infrastructure, be the future, and it would only take a small breakthrough to rapidly alter the way that operating systems and machines boot and back up.

If you want to see the project and read the source code before I lay out and describe the entire process of writing this software, as well as how to deploy it with cron on Linux, then you need wait no longer. Revision 1 alpha is now tested, ready and working in 5 different datacentres.

You can actually toggle which datacentres you wish to utilize as well, so it is somewhat flexible. The only important consideration here is to understand the limitations: there is no de-duping yet, and it uses tar archives and swiftly rather than querying the API directly. Since uploading a tar file directly through the API is relatively simple, I will probably implement it that way (as I have before) and get rid of swiftly in future iterations. Even so, a project like this is really ideal for learning more about BASH, cron, APIs, and the programmatic automation of sequential filesystem tasks with a division of labour between workers.
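To make the tar-and-upload pattern concrete, here is a minimal sketch of the idea. The directory, archive and container names are my own placeholders, not the repo's actual values, and the swiftly call is left commented out because it needs real credentials and a generated config:

```shell
#!/usr/bin/env bash
# Sketch of the tar-then-upload pattern described above.
# BACKUP_SRC, ARCHIVE and the container name are hypothetical examples.
set -euo pipefail

BACKUP_SRC="/tmp/obscene-demo"
ARCHIVE="/tmp/obscene-demo-backup.tar.gz"

# Create some demo content so the script is self-contained.
mkdir -p "$BACKUP_SRC"
echo "hello" > "$BACKUP_SRC/example.txt"

# Archive the source directory into a single tarball.
tar -czf "$ARCHIVE" -C "$(dirname "$BACKUP_SRC")" "$(basename "$BACKUP_SRC")"

# Upload step, repeated once per datacentre endpoint, e.g.:
# swiftly --conf swiftly-configs/swiftly-lon.conf put -i "$ARCHIVE" "backups/$(basename "$ARCHIVE")"

echo "created $ARCHIVE"
```

One tarball per run keeps the remote side simple (a single object per backup per region), at the cost of the de-duplication mentioned above.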


Test it (please note it may be a little buggy in different environments, and there are no instructions yet):

git clone https://github.com/aziouk/obsceneredundancy
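For the cron deployment mentioned above, a crontab entry along these lines is the sort of thing I mean. The path and schedule here are just an example; adjust them to wherever you cloned the repo:

```
# Hypothetical crontab entry: run the multi-DC backup nightly at 02:00,
# appending output to a log file.
0 2 * * * /root/obsceneredundancy/multidcbackup.sh >> /var/log/obsceneredundancy.log 2>&1
```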

Cheers &

Best wishes,

3 thoughts on “Obscene Redundancy utilizing Rackspace Cloud Files”

    • Hey Blake,

      Thanks man, it's always good to have feedback from you, chap. Yeah, the README and proper documentation are coming; I only just finished writing this code yesterday, so I have literally no documentation yet.

      In essence though it's quite simple to configure: config.conf is the main file in which local and remote containers and directories are defined (i.e. backup src and backup dst). All the swiftly configs live in the swiftly-configs folder and are specifically called by the script in multidcbackup.sh.
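As a sketch, a config.conf along these lines would match that description. Every variable name and value here is my own guess at the shape of the file, not the repo's actual contents:

```shell
# Hypothetical config.conf sketch: backup source/destination plus
# per-region [region]_TO_BACKUP toggles, as described above.
BACKUP_SRC="/var/www"           # local directory to back up
BACKUP_DST="backup-container"   # remote Cloud Files container
LON_TO_BACKUP=true              # back up to the LON endpoint
DFW_TO_BACKUP=false             # skip the DFW endpoint
IAD_TO_BACKUP=true              # back up to the IAD endpoint
```

Because it is plain shell variable syntax, the main script can simply source it and read the toggles.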

      The swiftly configs (none are provided with the repo, as you can see) can be auto-generated with the provided generation script for swiftly. It will ask for the user/pass for each of the endpoints you have and auto-generate the swiftly config files in the right place. Make sure you toggle the [region]_TO_BACKUP vars in config.conf, as the autogenerate script checks whether they are toggled before making configs. The multidcbackup script also looks at the toggled variables; if they are not switched on, backup to those endpoints will be skipped. It's quite nice and I think it could form part of a much, much larger project: 'sbaas', seamless backup as a service, or similar. I'm trying to keep it lightweight and add as much functionality as I can without making it too bloated.

      To install all the dependencies and get it working, you will want to use the installer shell script I provide. Other than that, it's remarkably simple and there are not many dependencies: just swiftly, python-dev and pip. It's still really quite limited at the moment, but like all my code it is really a proof of concept of what can be done with Rackspace Cloud, and the answer is really quite a lot!

      I'm looking forward to documenting this more because it is rather cool. I'm working on something similar for driveclient, at the request of the enterprise folks and with the assistance of the Janman. I know less about Rackspace driveclient than most, but I am convinced that a driveclient-cli project might remove a lot of the pain we are having. I have some more automation ideas that I will be happy to share with you if you're interested, chap.

      Have a good one &

      Best wishes,

      I added the documentation (although brief and incomplete) on the main GitHub page for obsceneredundancy. I have plans to utilize Amazon S3 and try to put together some sort of provider-'agnostic' architecture, maybe explore a GUI/interface on the web frontend, and maybe give it a database backend as well to keep track of some important things:

      1. Redundancy/Deduping nth file
      2. Retention period (days)

      It's going to be hard to match the efficiency of the C offered by driveclient, but I am confident that with some careful thought I can come up with a really nice front end that has rather less confusing error messages than many of the competitors out there. I'm told I am 'remaking Jungle Disk' 😀 hehe
