Static Sites With Nginx On Heroku

I have a few static sites on Heroku but in one case in particular I already had quite an involved nginx configuration - mainly 410s for some previous content and a series of redirects from older versions of the site. The common way of having static sites on Heroku appears to be to use a simple Rack middleware, but that would have meant reimplementing lots of boring redirect logic.
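For illustration, the sort of rules involved look like this (hypothetical paths rather than my actual configuration):

location /some-old-page {
  return 410;
}
rewrite ^/old-blog/(.*)$ /post/$1 permanent;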

Heroku buildpacks are great. The newer cedar stack is no longer tied to a particular language or framework; instead all of the discovery and knowledge about particular software is put into a buildpack. As well as the Heroku provided list it’s possible to write your own. Or, in this case, use one someone has created earlier.

I’ve just moved Vagrantbox.es over to Heroku due to the closure of a previous service. In doing that, instead of the previous database backed app, I’ve simply thrown all the content onto GitHub. This means anyone can fork the content and send pull requests. Hopefully this should mean I pay a bit more attention to suggestions and new boxes.

The repository is a nice simple example of using the Heroku nginx buildpack mentioned above too. You just run the following command to create a new Heroku application.

heroku create --stack cedar --buildpack http://github.com/essh/heroku-buildpack-nginx.git

And then, in typical Heroku fashion, use a git remote to deploy changes and updates. The repository is split into a www folder with the site content and a conf folder with the nginx configuration. The only clever part is the use of an ERB template for the nginx configuration file so we can pick up the correct port. We also use one worker process and don’t automatically daemonize the process - Heroku deals with this itself.
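The heart of that template looks something like the following sketch (the buildpack’s actual configuration has more in it):

daemon off;
worker_processes 1;

events {
  worker_connections 1024;
}

http {
  server {
    listen <%= ENV['PORT'] %>;
    root /app/www;
  }
}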

Self Contained JRuby Web Applications

Several things seemed to come together at once to make me want to hack on this particular project. In no particular order:

The Thoughtworks Technology Radar said the following:

Embedding a servlet container, such as Jetty, inside a Java application has many advantages over running the application inside a container. Testing is relatively painless because of the simple startup, and the development environment is closer to production. Nasty surprises like mismatched versions of libraries or drivers are eliminated by not sharing across multiple applications. While you will have to manage and monitor multiple Java Virtual Machines in production using this model, we feel the advantages offered by the simplicity and isolation are significant.

I’ve been getting more interested in JRuby anyway, partly because we’re finding ourselves using both Ruby and Scala at work, and maintaining a single target platform makes sense to me. Throw in the potential for interop between those languages and it’s certainly worth investigating.

Play 2.0 shipped and currently only provides the ability to create a self contained executable with a bundled web server. Creating WAR files for more traditional application servers is on the roadmap, but interestingly it wasn’t deemed essential for the big 2.0 release. I had a good chat with Martyn Inglis at work about some of the nice side effects of this setup.

Add to that the fact that every time I have to configure straight Ruby applications for production environments I get cross. I know where all the bits and pieces are buried and can do it well, but with so many moving parts it’s absolutely no fun whatsoever.

Warbler, the JRuby tool for creating WAR files from Ruby source, has just added the ability to embed Jetty to the master branch.

I decided to take all of this for a quick spin, and the resulting code is up on GitHub.

This is the simplest Rack application possible; it just prints Hello Jetty. The README covers how to install and run it so I won’t duplicate that information here.
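For completeness, the entire application is a config.ru along these lines (a sketch matching that description):

# config.ru - the simplest possible Rack application
run lambda { |env| [200, {'Content-Type' => 'text/plain'}, ['Hello Jetty']] }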

But I will print some nearly meaningless and unscientific benchmarks because, hey, who doesn’t like those?

⚡ ab -c 50 -n 5000 http://localhost:8090/

Server Software:        Jetty(8.y.z-SNAPSHOT)
Server Hostname:        localhost
Server Port:            8090

Document Path:          /
Document Length:        16 bytes

Concurrency Level:      50
Time taken for tests:   1.827 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      555999 bytes
HTML transferred:       80144 bytes
Requests per second:    2736.47 [#/sec] (mean)
Time per request:       18.272 [ms] (mean)
Time per request:       0.365 [ms] (mean, across all concurrent requests)
Transfer rate:          297.16 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   2.2      1      18
Processing:     1   16   7.7     15      61
Waiting:        0   14   7.2     13      57
Total:          2   18   7.5     17      61

Percentage of the requests served within a certain time (ms)
  50%     17
  66%     19
  75%     21
  80%     22
  90%     27
  95%     30
  98%     42
  99%     52
 100%     61 (longest request)

Running the same test on the same machine, but using Ruby 1.9.2-p290 and Thin, gives:

Server Software:        thin
Server Hostname:        localhost
Server Port:            9292

Document Path:          /
Document Length:        16 bytes

Concurrency Level:      50
Time taken for tests:   3.125 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      620620 bytes
HTML transferred:       80080 bytes
Requests per second:    1600.16 [#/sec] (mean)
Time per request:       31.247 [ms] (mean)
Time per request:       0.625 [ms] (mean, across all concurrent requests)
Transfer rate:          193.96 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.3      0       9
Processing:     3   31   6.4     33      52
Waiting:        3   25   6.4     28      47
Total:          4   31   6.4     33      52

Percentage of the requests served within a certain time (ms)
  50%     33
  66%     34
  75%     34
  80%     35
  90%     38
  95%     41
  98%     46
  99%     50
 100%     52 (longest request)

2736 requests per second on JRuby/Jetty vs 1600 on Ruby/Thin. As noted, this isn’t meaningfully useful: it’s a hello world example and I’ve not tried to pick the fastest stacks on either side. I’m more bothered about it not being slower, because the main reason to pursue this approach is simplicity. Having a single self contained artefact that contains all its dependencies, including a production web server, is very appealing.

I’m hoping to give this a go with some less trivial applications, and probably more importantly look to compare a production stack based around these self-contained executables vs the dependency chain that is modern Ruby application stacks.

Thanks to Nick Sieger for both writing Warbler and for helping with a few questions on the JRuby mailing list and on Twitter. Thanks also to James Abley for a few pointers on Java system properties.

Recent Projects And Talks

I’ve been pretty busy with all things GOV.UK recently but I’ve managed to get a few bits of unrelated code up and a few talks in. I’m still pretty busy so here’s a list of some of them rather than a proper blog post.

  • Puppet Data Mining talk from last week’s PuppetCamp in Edinburgh.
  • Introducing Web Operations talk I gave at work to give my mainly non-development colleagues an idea about what it’s all about.
  • Learning from building GOV.UK talk I gave a month or so back at Cambridge Geek Night. We did an excellent full project retrospective after the beta launch and this lists some of the things we learnt.

After someone bugged me on Twitter I realised the small bit of code we’ve been using for our Nagios dashboard wasn’t out in the wild. So introducing Nash, a very simple high level check dashboard which screen-scrapes Nagios and runs happily on Heroku.

Although I’ve not been writing too much on here I’ve been keeping Devops Weekly going each week for over a year now. I’ve just crossed 3000 subscribers which is pretty neat for a pet project.

Dashboards At GOV.UK

This is a bit of a cheat blog post really. I’ve been crazy busy all month with little time for anything except work (specifically shipping the first release of www.gov.uk). I have had a little time to blog over on the Cabinet Office blog though, about work we’ve done with dashboards.

http://digital.cabinetoffice.gov.uk/2012/02/08/radiating-information/

If you’re ever looking for good little hack projects dashboards are perfect, and often hugely useful once up and running. Convincing people of this before you have a few in the office might be hard - so just build something simple in a lunch break and find a screen to put it on. We’ve had great feedback from ours, both from people wandering through the office and from our colleagues who have a better idea of what’s going on.

What's Jekyll?

Jekyll is a static site generator, an open-source tool for creating simple yet powerful websites of all shapes and sizes. From the project’s readme:

Jekyll is a simple, blog aware, static site generator. It takes a template directory […] and spits out a complete, static website suitable for serving with Apache or your favorite web server. This is also the engine behind GitHub Pages, which you can use to host your project’s page or blog right here from GitHub.

It’s an immensely useful tool and one we encourage you to use here with Hyde.

Find out more by visiting the project on GitHub.

Talking To Jenkins From Campfire With Hubot

In what turned out to be a productive holiday hacking with languages I’d not used before, I got round to writing some CoffeeScript on node.js. This was more to do with scratching a personal itch than pure experimentation. I had a play with Janky (GitHub’s Jenkins/Hubot mashup) but found it a little opinionated on the Jenkins side, although the Campfire integration is excellent. Looking at the Jenkins commands in hubot-scripts though I found those even more opinionated.

The magic of open source though is that you can just fix things, then ask nice people if they like what you’ve done. I set about writing a few more general commands and lo, they’ve been quickly merged upstream.

These add:

  • A command to list all your Jenkins jobs and the current state
  • A command to trigger a normal build
  • A command to trigger a build with a list of parameters
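In Campfire that ends up looking something like this (the job name and parameter are made up, and the exact syntax may differ slightly from my memory of it):

hubot jenkins list
hubot jenkins build website
hubot jenkins build website, ENVIRONMENT=staging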

[Screenshot: Campfire window showing Jenkins tasks]

This was made much easier by first looking at the previous Jenkins commands, and then looking at other scripts in the hubot-scripts repository. The best way of learning a new language/framework is still on the shoulders of others.

I’ve got a few other good ideas for Jenkins related commands. I want to add a filter command to the jobs list, both by name and by current state. For longer running jobs I also want to report whether a build is currently running. And then maybe get information about a specific job, like the last few runs or similar. Any other requests or ideas most welcome.

EC2 Tasks For Fabric

For running ad-hoc commands across a small number of servers you really can’t beat Fabric. It requires nothing other than SSH on the servers, is generally just a one-line install, and puts next to no syntactic fluff between you and the commands you want to run. It’s much more of a Swiss Army knife to Capistrano’s bread knife.

I’ve found myself doing more and more EC2 work of late and have finally gotten around to making my life easier when using Fabric with Amazon instances. The result of a bit of hacking is Cloth (also available on PyPI). It contains some utility functions and a few handy tasks for loading host details from the EC2 API and using them in your Fabric tasks. No more static lists of host names that constantly need updating in your fabfile.

Specifically, with a fabfile that looks like:

#! /usr/bin/env python
from cloth.tasks import *

You can run:

fab all list

And get something like:

instance-name-1 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-2 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-3 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-4 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-5 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-6 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-7 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-8 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
...

And then you could run:

fab -P all uptime

And you’d happily get the load averages and uptime for all your EC2 instances.
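Under the hood there’s not much magic required; the core idea is just to populate Fabric’s env.hosts from the EC2 API. Here’s a minimal sketch of the approach using boto (function names are illustrative rather than Cloth’s actual API):

#! /usr/bin/env python
from boto import ec2
from fabric.api import env, task, run

def ec2_instances(region='us-east-1'):
    # Yield (name, public_dns) pairs for all running instances
    conn = ec2.connect_to_region(region)
    for reservation in conn.get_all_instances():
        for instance in reservation.instances:
            if instance.state == 'running':
                yield instance.tags.get('Name', instance.id), instance.public_dns_name

@task
def all():
    # Point Fabric at every running instance
    env.hosts = [dns for _, dns in ec2_instances()]

@task
def uptime():
    run('uptime')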

A few more tricks are documented in the GitHub README, including filtering the list by a regex and some convention based mapping to Fabric roles. I’ll hopefully add a few more features as I need them and generally tidy up a few things but I’m pretty happy with it so far.

First Experience Building Something With Clojure

I nearly always try and grab some time over Christmas to try something new and this year I’d been planning on spending some time with Clojure. I have several friends who are big fans, but dipping in and out of a book hadn’t really worked. What I needed was an itch to scratch.

I stuck with a domain I’m pretty familiar with for this first project, namely building a little web application. It renders a web page, makes HTTP requests, parses JSON into native data structures and does a bit of data juggling. Nothing fancy or overly ambitious, I was mainly interested in picking up the syntax, understanding common libraries and getting something built. Here’s what I’ve got:

[Screenshot: dashboard for Jenkins builds]

Jenkins has various API endpoints, but most dashboards I’ve seen concentrate on showing you the current status of all the builds. This is hugely useful when it comes to the simple case of continuous integration, but I’ve also been using Jenkins for other automation tasks, and making extensive use of parameterized builds. What the dashboard above concentrates on is showing recent builds for a specific job, along with the parameters used to run them.

Overall it was a fun project. Clojure made much more sense to me building this application than it had from simple examples. The Noir web framework is excellent and proved easy enough to jump into and simple enough that I could read the source code if I was interested in how something worked. The Leiningen build tool made getting started, downloading and managing dependencies and running tests and the application itself easy.

What I didn’t find particularly appealing was the lack of a strong standard library, coupled with the difficulty of tracking down suitable libraries. JSON parsing, dealing with HTTP requests and date handling are very common activities in web programming, and all needed me to jump around looking for the best way of dealing with the common case. I settled on clj-http, cheshire and using Java interop for date formatting. clj-http suffered from having lots of forks on GitHub to navigate through. I started with clojure-json before discovering it had been deprecated. And neither clj-time nor date-clj appeared to support Unix timestamps, as far as I could tell from the source. Throw in some uncertainty over the status of clojure-contrib, which isn’t well documented on the official site, and it takes some effort to get started.
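For flavour, fetching recent builds from Jenkins ends up pleasantly terse with those libraries. A sketch with illustrative names, not the exact code from the project:

(ns dashboard.jenkins
  (:require [clj-http.client :as http]
            [cheshire.core :as json]))

; Fetch the recent builds for a job from the Jenkins JSON API
(defn recent-builds [jenkins-url job]
  (-> (http/get (str jenkins-url "/job/" job "/api/json"))
      :body
      (json/parse-string true)
      :builds))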

The working code for this is already up on GitHub and I’d be interested in any Clojure experts showing me the error of my ways.

Setting Puppet Class Using Environment Variables

I’m not sure how novel this approach is but a few folks at work hadn’t seen it before so I thought it worth jotting down.

If you have even a small but dynamic set of servers then a problem arises with how those nodes are defined in Puppet. A node, remember, is defined in Puppet like so:

node web3.example.com {
  include web_server
}

The problem is twofold. If you have a growing infrastructure, that list of nodes is quickly going to get out of hand. The other problem is around provisioning new hosts, the obvious approach to which is something like:

  1. Summon new EC2 instance
  2. Change the node definition to include the new hostname
  3. Install puppet on the instance and do the SSL certificate signing dance
  4. Run puppet

Step 2 stands out. The others are easily automated, but do you want to automate a change to your Puppet manifests and a redeploy to the puppetmaster for each new instance? Probably not. Puppet has the concept of an external node classifier which can be used to solve this problem, but a simpler approach is to use an environment variable on the new machine.

Let’s say we define our nodes something like this instead:

node default {
  case $machine_role {
    frontend:    { include web_server }
    backend:     { include app_server }
    data:        { include db_server }
    monitoring:  { include monitoring_server }
    development: { include development }
    default:     { include base }
  }
}

If a machine runs Puppet with the $machine_role variable set to frontend it includes the web_server class; if that variable equals ‘data’ it’s going to include the db_server class instead. Much cleaner and more maintainable in my view. Now to set that variable.

Facter is the tool used by Puppet to get system information like the operating system or processor count. You can use these Facter-provided variables anywhere in your manifests. And one way of adding a new fact is via an environment variable on the client. Any environment variable prefixed with FACTER_ will be available in Puppet manifests. So in this case we can:

export FACTER_machine_role=frontend
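You can check that Puppet will see the new fact by running Facter directly:

FACTER_machine_role=frontend facter machine_role
frontend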

So our steps from above become something like:

  1. Summon new machine
  2. echo "export FACTER_machine_role=backend" >> /etc/environment
  3. Install puppet on the instance and do the SSL certificate signing dance
  4. Run puppet

Much easier to automate. And if you’re looking at a box and want to know what its role is, you can check the relevant environment variable.

Jenkins Parameterized Builds

I’m a huge Jenkins fan now, but that wasn’t always the case. I started with (and still have a soft spot for) CruiseControl, mainly building .NET and PHP applications. I then jumped to much simpler projects like Integrity, mainly for Python and Ruby projects. I reasoned I didn’t need the complexity of Cruise or Hudson; I just wanted to be able to run my tests on a remote machine and have something go green or red. I then worked out that wasn’t quite the case, and ended up committing webhook-like functionality to Integrity so I could chain builds together. And then I eventually tried Jenkins and found its power mixed with flexibility won me over. That’s really all just context, but hopefully it explains a little about why I like a few Jenkins features in particular, one of which is parameterized builds.

The Jenkins wiki describes this by saying:

Sometimes, it is useful/necessary to have your builds take several “parameters.”

But it then goes on to a use case that probably won’t mean much to dynamic language folk. This is one failing of much of the documentation around Jenkins; it often feels geared towards developers of certain languages when in reality the tool is useful everywhere. The important takeaway here is that builds can take arguments, which can have default values. Here’s an example:

Imagine we have a build which runs a set of simple tests against a live system. And further imagine that said system is composed of a number of different web services. Oh, and we’re running a few different parallel versions of the entire system for testing and staging purposes. We could have one Jenkins job per application/environment combination. Or we could have one parameterized build.

Let’s first specify our parameters from the Configure build screen of our job.

Here we’re specifying a TARGET_APPLICATION and TARGET_PLATFORM parameter. These are going to turn into environment variables we can use in our build steps. We can specify default values for these if we like too. I’m just using strings here, but I could also use a select box or file dialog, or other options provided by various plugins.
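Those parameters can then be used in build steps like any other environment variables. For example (the script here is made up for illustration):

# An Execute shell build step
./run_smoke_tests.sh --application "$TARGET_APPLICATION" --platform "$TARGET_PLATFORM"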

Now when we hit the build button, instead of the build just starting, we’re prompted for these values.

So with our new build, if we want it to run against the staging environment and just for the foobar application, we enter those values and hit build. That on its own can be used to drastically cut down on the number of individual builds you have to manage in Jenkins. And we’re not just restricted to text inputs; we can use boolean values or even prompt for file uploads at build time. But throw in a few plugins and things get even more interesting.

Jenkins has an overwhelming number of plugins available. If you haven’t spent the good half hour it takes to go down the list I’d highly recommend it. One of Jenkins’ best features is the ability to trigger a build after the successful run of another job. It allows you to chain things like test runs to integration deployments to smoke tests to production deploys - basically your modern continuous deployment/delivery pipeline. The Build Pipeline plugin is excellent for visualising this and introducing human gates if needed. Another useful plugin in this context is the Parameterized Trigger plugin. A limitation of Jenkins is that downstream builds can’t be passed parameters, but this plugin works around that. Instead of ticking the Build other projects option you go for the Trigger parameterized build on other projects box. This allows you to select the project and to specify parameters to pass. These could be hard coded values, parameters already passed into the pipeline, or things from other plugins like the git SHA1 hash or Subversion revision number.
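If I remember the configuration correctly, the predefined parameters box in that plugin takes simple key=value lines, and can reference the triggering build’s own parameters, something like:

TARGET_PLATFORM=staging
TARGET_APPLICATION=$TARGET_APPLICATION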

Combine all this together and it’s simple to have a per project continuous integration build running a test suite, kicking off a standard set of downstream jobs for deploying to a test environment (by passing the relevant parameters), running some basic smoke tests and allowing someone to click a button to deploy to production. Or going the whole continuous deployment, I-trust-my-test-suite route and deploying automatically. All within Jenkins. Getting this working requires a bit of planning. You want all of your projects to be deployed the same way, but you probably want that to be the case anyway.

Providing flexible push button builds/deploys and reducing the number of nearly identical jobs in Jenkins are just two advantages of using parameterized builds. Most of the tricks come from thinking about Jenkins as much more than a continuous integration tool and more of an automation framework - I know of at least one large organisation that has pretty much replaced cron with Jenkins for many tasks, for instance. Running tests automatically, in a central environment as close to production as possible, is important. But it’s just a sanity check if you’re doing everything right already. Centralising activity on a build pipeline requires you to be doing all that anyway, but in my opinion gives way more useful and rapid feedback about the state of the code your team is writing.