I keep meaning to get around to writing about why I think the future for web developers is operations, but in lieu of a proper post here’s a list of things I’ve been spending my work life getting to know this month:
Puppet - It’s brilliant. Define (with a Ruby DSL of course) what software and services you want running on all your machines, install a daemon on each of them, and hey presto central configuration management.
VMware vSphere - Puppet makes more sense the more boxes you have. With vSphere I can have as many boxes as I want (nearly). Command line scripts and an actually very nice Windows GUI for setting up virtual machines are all pretty nice, especially running on some meaty hardware.
Nagios - With lots of boxes comes lots of responsibility (or something). Nagios might look a bit ugly, and bug me with its needless frames-based admin, but I can see what people see in it. Which frankly is the ability to monitor everything everywhere for any change whatsoever.
Solr - I’m now also pretty well versed in using Solr. I’ve used it in the past, but always behind a Ruby or Python library. Now I know my way around all the XML based configuration innards. Heck, I’m even running a nightly release from a couple of days ago in a production environment because I wanted a cool new feature. A special mention to the Solr community on the mailing list, Twitter and IRC for being great when I had questions.
Solaris - I nearly forgot, I spent more time than I care to remember working out how to use OpenSolaris (conclusion: OK, but not Debian) and eventually Solaris 10 (conclusion: I hope I don’t have to do that again). My installation notes read like some hideous hack but everything works fine in production and it’s scarily repeatable, so I’ll live with it for now.
Thanks to Brad I’ve just released a new version of Django Test Extensions (also on GitHub) with support for running tests without the overhead of setting up and tearing down the database. Django still has a few places where it assumes you’ll have a database somewhere in your project - and the default test runner is one of them.
On the first day at Barcamp Brighton this year I did a brief talk about getting started with automating deployment. I kept it nice and simple and didn’t focus on any specific technology or tool - just the general principles and pitfalls of doing anything manually. You can see the “slides on Slideshare”:
As part of the presentation I even did a live demo and promised I’d upload the code I used. The following is an incredibly simple fabric file that has most of the basic set of tasks. Fabric is a Python tool similar to Capistrano in Ruby. I don’t really care whether you’re using make, ant, rake, capistrano or just plain shell scripts. Getting from not automating things to automating deployments is the important part - and it’s easier than you think.
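A fabfile from that era follows this rough shape. The sketch below is mine, not the exact demo code: the hostname and paths are placeholders, and `local`, `put`, `run` and `sudo` are helpers provided by Fabric itself.

```python
# A minimal deployment fabfile sketch; server name and paths are placeholders.

def deploy():
    """Package the master branch, upload it, and unpack it on the server"""
    local('git archive --format=tar master | gzip > release.tar.gz')
    put('release.tar.gz', '/var/www/example/release.tar.gz')
    run('cd /var/www/example && tar zxf release.tar.gz')

def restart():
    """Gracefully restart Apache once the new code is in place"""
    sudo('apache2ctl graceful')
```

The important idea is simply that every step lives in a task you can run with one command, rather than in your head.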
The other part of the code example was a very basic apache virtualhost so just in case anyone needed that as well here it is:
Allow from all
CustomLog /var/log/apache2/sample/access.log combined
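Fleshed out, a complete minimal virtualhost along those lines might look like the following; the ServerName and document root are placeholders to change for your own setup.

```apache
<VirtualHost *:80>
    ServerName sample.example.com
    DocumentRoot /var/www/sample

    <Directory /var/www/sample>
        Order allow,deny
        Allow from all
    </Directory>

    ErrorLog /var/log/apache2/sample/error.log
    CustomLog /var/log/apache2/sample/access.log combined
</VirtualHost>
```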
DJUGL is back, the monthly Django meetup in London. I think the last few times have been as much about useful Python stuff as just using Django, and this time it’s officially a bit more broad-ranging. If you’re in or around London on the 24th of September then come along.
You can get more information on Twitter or by following Rob. But expect a few short talks, some interesting conversations and maybe some beer with other like-minded developers.
I’m going to be talking about automating deployment of Python web applications. If you follow me on Twitter you’ll have heard me rambling a little about some of what I’ve been up to, and some of the posts here give an insight into what I’ve been working on. But the short version is that several friends mentioned how difficult it could be to get a working Django application from a local machine to a production web server. And I thought I’d better get down in script form my experiences of Django, WSGI applications and web server setup to make things easier.
I think this situation is partially caused by the success of the Django development server, and partially by people coming from a PHP background. In my PHP days I always wanted to know how Apache did its thing, so long ago I jumped into anything and everything in httpd.conf, from loading modules to virtual hosts. But not everyone does the same, and PHP does make simple deployments easy enough that you might get away without doing so. Rails went through the same problems and seems to be coming out the other side. I’m hoping Django and Python will soon be in the same position, where basic deployment is just a given.
Now I’m generally an Ubuntu guy, but I’ve just had the need to set up some boxes running Solaris for Django and a handful of WSGI applications. I know my way around Ubuntu pretty well. I know all the packages I need to install and in what order. Hell, I even have all that scripted so I can just run a command and it works by magic. I’ll script the following steps too when I get round to it, but here, in one list, are the installation instructions for Apache, mod_wsgi, MySQL, MySQLdb, setuptools and memcached that worked for me on the latest version of OpenSolaris (2009.06 at the time of writing).
First up I needed to install Apache and start the service running.
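From my notes, that came down to a package install and an SMF enable; the SUNWapch22 package name is worth double-checking against `pkg search` on your own system.

```shell
# install Apache 2.2 and start it as an SMF-managed service
pfexec pkg install SUNWapch22
svcadm enable http:apache22
```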
This installs the library files into /usr/mysql/5.0/lib and Python doesn’t know where to find them. The above command links them into the more standard /usr/lib folder where Python will pick them up nicely.
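For reference, the install-and-link step described here looks roughly like the following; the SUNWmysql5 package name and the exact library glob are assumptions from my notes and may vary between OpenSolaris releases.

```shell
# install MySQL 5.0 (libraries land in /usr/mysql/5.0/lib)
pfexec pkg install SUNWmysql5
# link the client libraries somewhere Python's MySQLdb build can find them
pfexec ln -s /usr/mysql/5.0/lib/libmysqlclient* /usr/lib/
```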
I tend to use mod_wsgi for serving Python apps behind Apache, however a mod_wsgi package isn’t part of the default package list. It is however available in the pending list so first you need to add that list of packages.
This installs the module but you then need to tell Apache to load it. Add the following line to /etc/apache2/2.2/conf.d/modules-32.load or /etc/apache2/2.2/conf.d/modules-64.load depending on your architecture.
LoadModule wsgi_module libexec/mod_wsgi.so
To get Apache to load that module you need to restart it like so:
svcadm restart http:apache22
I use Pip for installing Python code, but tend to install setuptools to make installing Pip easier. I don’t know if an up to date Pip package exists.
pfexec pkg install python-setuptools
This should leave you with easy_install on your path so installing Pip, then virtualenv should be a breeze.
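Assuming easy_install did land on your path, the rest is a couple of one-liners:

```shell
pfexec easy_install pip
pfexec pip install virtualenv
```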
As an added bonus I also installed memcached for some snappy caching.
pfexec pkg install SUNWmemcached
This won’t start up by default and needs a little configuration. The first command will launch you into a prompt where you can type the rest of the commands.
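The interactive session looks something like the transcript below; the memcached/options property name and value are assumptions from my notes, so check them against `listprop` inside svccfg before relying on them.

```
pfexec svccfg
svc:> select memcached
svc:> setprop memcached/options = astring: "-u nobody"
svc:> end
```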
Once you’ve done that you should be able to start memcached on the standard port.
svcadm refresh memcached
svcadm enable memcached
Et voila. The internet helped massively on my quest to track down this information. Not all of the following links turned out to work for me but all of them led me in the right direction. Thanks everyone.
I’m not a Solaris admin. I’m not really a sysadmin at all, I just end up pretending to be one of late. If any Solaris people with experience of these tools are reading this, I’d be grateful for any hints and tips. Hopefully this saves a few people from the head scratching I’ve been doing for the last few days.
So one of the problems with using pip or easy_install as part of an automated deployment process is that they rely on an internet connection. More than that, they rely on PyPI being up, as it’s a centralised system, unlike all the apt package mirrors.
The best solution seems to be to host your own PyPI compliant server. Not only can you load all the third party modules you use onto it, you can also upload any internal applications or libraries you like. By running this on your local network you ensure you’re not dependent on PyPI or an internet connection.
At the moment I’m playing with Chishop, a Django application for maintaining a PyPI compatible server. Another alternative if that doesn’t work out is EggBasket.
To install from your own PyPi server you can specify the location of your Chishop instance with the -i flag.
This will fall back to the PyPi server if it doesn’t find the relevant package. If you want to stop that behaviour and make sure you have a local package then you can limit the hosts with the -H flag like so.
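Concretely, with easy_install the two variants look like this; the internal hostname below is a placeholder for wherever your Chishop instance lives.

```shell
# install from the local index, falling back to PyPI for anything missing
easy_install -i http://pypi.internal/simple/ some-package
# restrict downloads to the local host only, so missing packages fail loudly
easy_install -i http://pypi.internal/simple/ -H pypi.internal some-package
```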
I’ve been playing with automating Django deployments again, this time using Fabric. I found a number of examples on the web but none of them quite fit the bill for me. I don’t like serving directly from a repository; I like to have either a package or a tar I can use to say “that is what went to the server”. I also like having a quick rollback command, as well as being able to deploy a particular version of the code when the need arises. And I wanted to go from a clean Ubuntu install (plus SSH) to a running Django application in one command from the local development machine. The Apache side of things is nicely documented in this Gist, which made a good starting point.
I’m still missing a few things in this setup, mind, and at the moment you still have to set up your local machine yourself. I’m probably going to create a Paster template and another fabfile to do that. The instructions are a little rough at the moment as well, and I’ve left the database out of it as everyone has their own preference.
This particular fabric file makes setting up and deploying a Django application much easier, but it does make a few assumptions. Namely that you’re using Git, Apache and mod_wsgi, and that you’re on Debian or Ubuntu. You should also have Django installed on your local machine, and SSH installed on both the local machine and any servers you want to deploy to.
Note that I’ve used the name project_name throughout this example. Replace this with whatever your project is called.
First step is to create your project locally:
pre. mkdir project_name
django-admin.py startproject project_name
Now add a requirements file so pip knows to install Django. You’ll probably add other required modules in here later. Create a file called requirements.txt and save it at the top level with the following contents:
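At a minimum the file just needs to name Django; pinning a version is a good habit, though the version number below is only an example.

```
Django==1.0.2
```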
Then save this fabfile.py file in the top level directory which should give you:
You’ll need a WSGI file called project_name.wsgi, where project_name is the name you gave to your Django project. It will probably look like the following, depending on your specific paths and the location of your settings module.
pre. import os
import sys
# put the Django project on sys.path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../")))
os.environ["DJANGO_SETTINGS_MODULE"] = "project_name.settings"
from django.core.handlers.wsgi import WSGIHandler
application = WSGIHandler()
Last but not least you’ll want a virtualhost file for Apache which looks something like the following. Save this as project_name in the inner directory. You’ll want to change /path/to/project_name/ to the location on the remote server you intend to deploy to.
WSGIDaemonProcess project_name-production user=project_name group=project_name threads=10 python-path=/path/to/project_name/lib/python2.6/site-packages
WSGIScriptAlias / /path/to/project_name/releases/current/project_name/project_name.wsgi
Allow from all
CustomLog /var/log/apache2/access.log combined
Now create a file called .gitignore, containing the following. This prevents the compiled python code being included in the repository and the archive we use for deployment.
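Given that the only thing to exclude at this point is compiled Python, a single pattern does the job:

```
*.pyc
```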
You should now be ready to initialise a git repository in the top level project_name directory.
In reality you might prefer to keep your wsgi files and virtual host files elsewhere. The fabfile has a variable (config.virtualhost_path) for this case. You’ll also want to set the hosts that you intend to deploy to (config.hosts) as well as the user (config.user).
The first task we’re interested in is called setup. It installs all the required software on the remote machine, then deploys your code and restarts the webserver.
pre. fab local setup
After you’ve made a few changes and committed them to the master Git branch you can run the following to deploy the changes.
pre. fab local deploy
If something is wrong then you can rollback to the previous version.
pre. fab local rollback
Note that this only allows you to rollback to the release immediately before the latest one. If you want to pick an arbitrary release then you can use the following, where 20090727170527 is the timestamp of an existing release.
pre. fab local deploy_version:20090727170527
If you want to ensure your tests run before you make a deployment then you can do the following.
pre. fab local test deploy
The actual fabfile looks like this. I’ve uploaded a Gist of it, along with the docs, so if you want to improve it please clone it.
"Use the local virtual server"
config.hosts = ['172.16.142.130']
config.path = '/path/to/project_name'
config.user = 'garethr'
config.virtualhost_path = "/"
"Run the test suite and bail out if it fails"
local("cd $(project_name); python manage.py test", fail="abort")
Setup a fresh virtualenv as well as a few useful directories, then run
a full deployment
sudo('aptitude install -y python-setuptools')
sudo('pip install virtualenv')
sudo('aptitude install -y apache2')
sudo('aptitude install -y libapache2-mod-wsgi')
# we want rid of the default apache config
sudo('cd /etc/apache2/sites-available/; a2dissite default;')
run('mkdir -p $(path); cd $(path); virtualenv .;')
run('cd $(path); mkdir releases; mkdir shared; mkdir packages;', fail='ignore')
Deploy the latest version of the site to the servers, install any
required third party modules, install the virtual host and
then restart the webserver
config.release = time.strftime('%Y%m%d%H%M%S')
"Specify a specific version to be made live"
config.version = version
run('cd $(path); rm releases/previous; mv releases/current releases/previous;')
run('cd $(path); ln -s $(version) releases/current')
Limited rollback capability. Simply loads the previously current
version of the code. Rolling back again will swap between the two.
run('cd $(path); mv releases/current releases/_previous;')
run('cd $(path); mv releases/previous releases/current;')
run('cd $(path); mv releases/_previous releases/previous;')
Helpers. These are called by other functions rather than directly
require('release', provided_by=[deploy, setup])
"Create an archive from the current Git master branch and upload it"
local('git archive --format=tar master | gzip > $(release).tar.gz')
run('cd $(path)/releases/$(release) && tar zxf ../../packages/$(release).tar.gz')
"Add the virtualhost file to apache"
require('release', provided_by=[deploy, setup])
sudo('cd $(path)/releases/$(release); cp $(project_name)$(virtualhost_path)$(project_name) /etc/apache2/sites-available/')
sudo('cd /etc/apache2/sites-available/; a2ensite $(project_name)')
"Install the required packages from the requirements file using pip"
require('release', provided_by=[deploy, setup])
run('cd $(path); pip install -E . -r ./releases/$(release)/requirements.txt')
"Symlink our current release"
require('release', provided_by=[deploy, setup])
run('cd $(path); rm releases/previous; mv releases/current releases/previous;', fail='ignore')
run('cd $(path); ln -s $(release) releases/current')
"Update the database"
run('cd $(path)/releases/current/$(project_name); ../../../bin/python manage.py syncdb --noinput')
"Restart the web server"
Django now has much better support for conditional view processing using the standard ETag and Last-Modified HTTP headers. This means you can now easily short-circuit view processing by testing less-expensive conditions. For many views this can lead to a serious improvement in speed and reduction in bandwidth.
A nice set of decorators for dealing with ETags and Last-Modified headers. Again very simple to use and set up, and a simple way of squeezing a little more performance out of your application.
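The decorators live in django.views.decorators.http; a sketch of using the condition decorator follows, where the Entry model and its published field are invented for illustration.

```python
from django.views.decorators.http import condition

def latest_entry(request):
    # the newest publication date drives the Last-Modified header
    return Entry.objects.latest('published').published

@condition(last_modified_func=latest_entry)
def front_page(request):
    # the expensive rendering only happens when the check fails
    ...
```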
The basic workflow of Django’s admin is, in a nutshell, “select an object, then change it.” This works well for a majority of use cases. However, if you need to make the same change to many objects at once, this workflow can be quite tedious. In these cases, Django’s admin lets you write and register “actions” – simple functions that get called with a list of objects selected on the change list page.
Anything that makes the admin a little more powerful and a little more flexible is a good idea in my book. Admin actions allow you to run code over multiple objects at once: simply select them with a checkbox, then choose an action to run. This is worth it for the delete action alone, but you can write your own actions simply enough as well (for instance approving a batch of comments, or archiving a set of articles).
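A sketch of that bulk-approve example; the Comment model and its approved field are assumptions, not real code from a project.

```python
from django.contrib import admin

def approve_comments(modeladmin, request, queryset):
    # one UPDATE across everything ticked on the change list
    queryset.update(approved=True)
approve_comments.short_description = "Approve selected comments"

class CommentAdmin(admin.ModelAdmin):
    actions = [approve_comments]
```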
You can now make fields editable on the admin list views via the new list_editable admin option. These fields will show up as form widgets on the list pages, and can be edited and saved in bulk.
Another time-saving admin addition, this time making some fields editable from the change list rather than the object view. For quick changes, especially to boolean fields, this is a nice addition.
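list_editable is just another ModelAdmin option; the fields named there (which must also appear in list_display) become form widgets on the change list. The model and fields below are assumptions for illustration.

```python
from django.contrib import admin

class ArticleAdmin(admin.ModelAdmin):
    list_display = ('title', 'published')
    list_editable = ('published',)
```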
You can now control whether or not Django creates database tables for a model using the managed model option. This defaults to True, meaning that Django will create the appropriate database tables during syncdb and remove them as part of the reset command. That is, Django manages the database table’s lifecycle. If you set this to False, however, no database table creation or deletion will be automatically performed for this model. This is useful if the model represents an existing table or a database view that has been created by some other means.
I particularly like this addition. One of the issues I had with Django was some of the built in assumptions, in particular that you’d be using a SQL database backend. Using unmanaged models looks like a great approach to using an alternative database like CouchDB, Tokyo Tyrant or MongoDB, or representing a webservice interface in your application.
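For the existing-table case, the sketch is just a Meta option; the table and column names here are assumptions.

```python
from django.db import models

class LegacyInvoice(models.Model):
    number = models.CharField(max_length=20)

    class Meta:
        managed = False        # syncdb/reset leave this table alone
        db_table = 'invoices'  # pre-existing table created elsewhere
```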
I’m sure I’ll have missed a few other interesting changes or additions. Anyone else have a favourite?
Asteroid is a simple web interface for running scripts and recording the results. It’s like a much simpler and more general purpose version of something like CruiseControl. You can get the code on GitHub.
I built it to solve two main problems:
It’s sometimes useful to have a historical record of a script’s execution, in particular whether it passed or failed and what the output was. Just running a command line script probably doesn’t give you that. It’s also useful to have a more graphical interface for those members of the team who don’t use the command line.
When working in a team you often want to run scripts against shared infrastructure, for instance deploying a testing release or running a test suite. Seeing what is running at present helps with that.
So it should be useful for running deployments, test suites, backups, etc. It currently doesn’t have scheduling or similar built in, but as everything is triggered by hitting a URL it would be simple enough to use cron for something like that. It should also be useful whatever language you write your scripts in; rake, ant, shell scripts, etc. At the end of the day it just executes a command at the console.
Asteroid uses the Django Python framework under the hood.
You’ll also need a database. The default in the shipped settings is to use sqlite but this should work with any database supported by Django.
You’ll also need a decent web browser. I’ve gone and used HTML5 as an experiment and with this being a developer tool I’m hoping to stick with it. It would be easy enough to convert the templates if this is a problem however.
The application has an optional message queue backend which can be enabled in the settings file. This is used to improve the responsiveness of the application, as well as allowing commands to be executed on a remote machine rather than on the box Asteroid is running on.
If you’re using the message queue backend you’ll need to run the listener script in order to get your commands executed. At the moment that means modifying a constant in the listener script at asteroid/bin/asteroid_listen.py to point at a running message queue instance.
Once you’re up and running you should be able to add commands via the admin interface at http://localhost:8000/admin/. The username and password should be those you added when creating the database via the syncdb command above.
The development configs include a few additional applications (mentioned above) which I use for testing and debugging. You can run the test suite like so:
manage.py test --coverage
This is an early release that just about works for me. I can already see a number of areas I’d like to clean up a little or extend. For instance:
Other deployment options, including a WSGI file and a spawning startup script.
Use a database migration system to make upgrades easier.
Make the message queue listener script more robust.
Make the command entry more robust; it sometimes takes a bit of fiddling to get something to run correctly.
Formalise running scripts on remote machines, including support for running on multiple machines.
Paging for long lists of commands or runs.
I’m pretty happy with how it’s shaping up so far. Under the hood it works by having the web app put a JSON document on the message queue. The JSON contains the command to be run and a callback URL. The script listening to the message queue picks up the message, runs the command, and posts a JSON document back to the webhook url. It keeps the web interface snappy, as well as meaning it can show which commands are currently in progress at any given time. It also has the side benefit of meaning you can execute commands on a remote machine, as the listener doesn’t care where it’s running.
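The message itself is just a small JSON document. The field names below are assumptions for illustration rather than Asteroid’s actual schema, but they show the shape of the exchange.

```python
import json

# What the web app might put on the queue: the command to run plus a
# webhook to POST the results back to (field names are assumptions).
message = json.dumps({
    "command": "rake test",
    "callback": "http://localhost:8000/runs/1/callback/",
})

# The listener decodes the document, runs the command, and then POSTs
# a similar JSON document back to the callback URL with the output.
job = json.loads(message)
print(job["command"])  # → rake test
```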
As noted above I have a few ideas of where I want to take it, but I’m going to try using it for a bit and see how that goes. If anyone else finds it useful then do let me know.
A spreadsheet. A CSV file. Whatever is in use internally. Made available to people like us under a suitable license.
I feel a little self-absorbed quoting myself (from a recent Refresh Cambridge discussion) but I did like the turn of phrase. What I was rambling on about was Cambridge County mapping data, after a question from a nice chap from the council about what “new, exciting map technology” we’d like to see. But it applies to any data you’re trying to make public whatsoever, be it government or otherwise.
What myself and a few other people were talking about, and one of the things that has been discussed as part of the Rewired State group, is that it’s all about the data, not necessarily about a nice web based API.
Now I’ve written and spoken about the need for well designed APIs being treated as part of the user interface. But remember: interface design, and by association API design, isn’t easy. API design is often about building manageable flexibility. A public API is often about managing the flow of data you control out to third parties; as well as the information itself it might include limitations on usage, request rate, or storage. A public API codifies how that information can be accessed. APIs also have to tread a fine line between making it easy for you to solve your problem, and making it easy for everyone else to solve their completely different problems. These compromises are design.
But not everything needs an API. Sometimes it’s just about the data, and the best way of getting at that data is as raw as possible. Government data is an easy sell here, as it is (or rather should be) our data. It’s also for the most part interesting to read rather than write (historical council tax data, or population data for instance). Raw data can generally be provided quicker than via an API. It doesn’t need fragile computer systems or extensive manual labour. It doesn’t need particularly clever computing resources. Just upload a spreadsheet or a CSV file to a sensible URL on a known, regular basis and away we go.
And giving data like this away to the development community is likely to have a few additional benefits if that data is useful (it probably is to someone). We’ll happily write software libraries, or create APIs over the top of it for you. We’ll also write all sorts of useful tools using the data in ways no one else thought of. So if you’re sat on a load of data that’s not core to your business, or is meant to be public anyway, then let’s start talking publicly about how to just get this out on the web quickly and cheaply, rather than spending lots of your time and money on something fancy.