Talking To Jenkins From Campfire With Hubot

In what turned out to be a productive holiday hacking with languages I’d not used before, I got round to writing some CoffeeScript on Node.js. This was more to do with scratching a personal itch than pure experimentation. I had a play with Janky (GitHub’s Jenkins/Hubot mashup) but found it a little opinionated on the Jenkins side, although the Campfire integration is excellent. Looking at the Jenkins commands in hubot-scripts, though, I found those even more opinionated.

The magic of open source though is you can just fix things, then ask nice people if they like what you’ve done. I set about writing a few more general commands and lo, they’ve been quickly merged upstream.

These add:

  • A command to list all your Jenkins jobs and their current state
  • A command to trigger a normal build
  • A command to trigger a build with a list of parameters

campfire window showing jenkins tasks
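
The interaction in Campfire ends up looking something like this (the trigger phrases and job name here are illustrative rather than definitive, so check the script’s own documentation for the exact syntax):

hubot jenkins list
hubot jenkins build website-deploy
hubot jenkins build website-deploy, ENVIRONMENT=staging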

This was made much easier by first looking at the previous Jenkins commands, and then at other scripts in the hubot-scripts repository. The best way of learning a new language or framework is still standing on the shoulders of others.

I’ve got a few other ideas for Jenkins-related commands. I want to add a filter to the jobs list, both by name and by current state. For longer-running jobs I also want to report whether a build is currently in progress. And then maybe get information about a specific job, like the last few runs or similar. Any other requests or ideas are most welcome.

EC2 Tasks For Fabric

For running ad-hoc commands across a small number of servers you really can’t beat Fabric. It requires nothing other than SSH on the servers, is generally just a one-line install, and puts next to no syntactic fluff between you and the commands you want to run. It’s much more of a Swiss Army knife to Capistrano’s bread knife.

I’ve found myself doing more and more EC2 work of late and have finally gotten around to making my life easier when using Fabric with Amazon instances. The result of a bit of hacking is Cloth (also available on PyPI). It contains some utility functions and a few handy tasks for loading host details from the EC2 API and using them in your Fabric tasks. No more static lists of host names that constantly need updating in your fabfile.

Specifically, with a fabfile that looks like:

#! /usr/bin/env python
from cloth.tasks import *

You can run:

fab all list

And get something like:

instance-name-1 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-2 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-3 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-4 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-5 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-6 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-7 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-8 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
...

And then you could run:

fab -P all uptime

And you’d happily get the load averages and uptime for all your EC2 instances.
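
Under the hood the idea is straightforward: ask the EC2 API for your instances and feed the results into Fabric’s env.hosts. Here’s a rough sketch of that approach using boto (illustrative only, not Cloth’s actual code; the region and credential handling are assumptions):

from boto.ec2 import connect_to_region
from fabric.api import env, run, task

def ec2_instances(region="us-east-1"):
    """All running EC2 instances in the given region (credentials via boto config)."""
    conn = connect_to_region(region)
    return [i for r in conn.get_all_instances() for i in r.instances
            if i.state == "running"]

@task
def all():
    """Populate env.hosts from the EC2 API rather than from a static list."""
    env.hosts = [i.public_dns_name for i in ec2_instances()]

@task
def uptime():
    run("uptime")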

A few more tricks are documented in the GitHub README, including filtering the list by a regex and some convention-based mapping to Fabric roles. I’ll hopefully add a few more features as I need them and generally tidy up a few things, but I’m pretty happy with it so far.

First Experience Building Something With Clojure

I nearly always try and grab some time over Christmas to try something new and this year I’d been planning on spending some time with Clojure. I have several friends who are big fans, but dipping in and out of a book hadn’t really worked. What I needed was an itch to scratch.

I stuck with a domain I’m pretty familiar with for this first project, namely building a little web application. It renders a web page, makes HTTP requests, parses JSON into native data structures and does a bit of data juggling. Nothing fancy or overly ambitious, I was mainly interested in picking up the syntax, understanding common libraries and getting something built. Here’s what I’ve got:

Dashboard for Jenkins builds

Jenkins has various API endpoints, but most dashboards I’ve seen concentrate on showing you the current status of all the builds. This is hugely useful when it comes to the simple case of continuous integration, but I’ve also been using Jenkins for other automation tasks, and making extensive use of parameterized builds. What the dashboard above concentrates on is showing recent builds for a specific job, along with the parameters used to run them.

Overall it was a fun project. Clojure made much more sense to me building this application than it had from simple examples. The Noir web framework is excellent and proved easy enough to jump into and simple enough that I could read the source code if I was interested in how something worked. The Leiningen build tool made getting started, downloading and managing dependencies and running tests and the application itself easy.

What I didn’t find particularly appealing was the lack of a strong standard library, coupled with the difficulty of tracking down suitable libraries. JSON parsing, HTTP requests and date handling are very common activities in web programming, and all of them had me jumping around looking for the best way of dealing with the common case. I settled on clj-http, Cheshire and Java interop for date formatting. clj-http suffered from having lots of forks on GitHub to navigate through. I started with clojure-json before discovering it had been deprecated. And neither clj-time nor date-clj appeared to support Unix timestamps, as far as I could tell from the source. Throw in some uncertainty over the status of clojure-contrib, which isn’t well documented on the official site, and it takes some effort to get started.

The working code for this is already up on GitHub and I’d be interested in any Clojure experts showing me the error of my ways.

Setting Puppet Class Using Environment Variables

I’m not sure how novel this approach is but a few folks at work hadn’t seen it before so I thought it worth jotting down.

If you have even a small but dynamic set of servers then a problem arises with how those nodes are defined in Puppet. A node, remember, is defined in Puppet like so:

node web3.example.com {
  include web_server
}

The problem is twofold. If you have a growing infrastructure, that list of nodes is quickly going to get out of hand. The other problem is around provisioning new hosts, the obvious approach to which is something like:

  1. Summon new EC2 instance
  2. Change the node definition to include the new hostname
  3. Install puppet on the instance and do the SSL certificate signing dance
  4. Run puppet

Step 2 stands out. The others are easily automated, but do you really want to automate a change to your Puppet manifests and a redeploy to the puppetmaster for every new instance? Probably not. Puppet has the concept of an external node classifier, which can be used to solve this problem, but another, simpler approach is to use an environment variable on the new machine.

Let’s say we define our nodes something like this instead:

node default {
  case $machine_role {
    frontend: { include web_server }
    backend: { include app_server }
    data: { include db_server }
    monitoring: { include monitoring_server }
    development: { include development }
    default: { include base }
  }
}

If a machine runs Puppet with the $machine_role variable set to ‘frontend’ it includes the web_server class; if that variable equals ‘data’ it will include the db_server class instead. Much cleaner and more maintainable in my view. Now to set that variable.

Facter is the tool Puppet uses to gather system information like the operating system or processor count. You can use these Facter-provided variables anywhere in your manifests. One way of adding a new fact is via an environment variable on the client: any environment variable prefixed with FACTER_ will be available in your Puppet manifests. So in this case we can:

export FACTER_machine_role=frontend

So our steps from above become something like:

  1. Summon new machine
  2. echo "export FACTER_machine_role=backend" >> /etc/environment
  3. Install puppet on the instance and do the SSL certificate signing dance
  4. Run puppet

Much easier to automate. And if you’re looking at a box and want to know what its role is, you can check the relevant environment variable.
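
For what it’s worth, here’s a hedged sketch (using Python and boto, not from the original post) of what automating those steps might look like when summoning the instance; the AMI id, region and puppet master hostname are all placeholders:

from boto.ec2 import connect_to_region

# user-data script run on first boot: record the role fact, then install and run puppet
USER_DATA = """#!/bin/bash
echo "export FACTER_machine_role=backend" >> /etc/environment
export FACTER_machine_role=backend
apt-get install -y puppet
# the certificate still needs signing on the puppet master (or use autosigning)
puppet agent --test --server puppet.example.com --waitforcert 120
"""

conn = connect_to_region("us-east-1")
conn.run_instances("ami-12345678", instance_type="m1.small", user_data=USER_DATA)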

Jenkins Parameterized Builds

I’m a huge Jenkins fan now, but that wasn’t always the case. I started with (and still have a soft spot for) CruiseControl, mainly building .NET and PHP applications. I then jumped to much simpler projects like Integrity, mainly for Python and Ruby work. I reasoned I didn’t need the complexity of Cruise or Hudson; I just wanted to be able to run my tests on a remote machine and have something go green or red. I then worked out that wasn’t quite the case, and ended up committing webhook-like functionality to Integrity so I could chain builds together. And then I eventually tried Jenkins and found its power mixed with flexibility won me over. That’s really all just context, but hopefully it explains a little about why I like a few Jenkins features in particular, one of which is parameterized builds.

The Jenkins wiki describes this by saying:

Sometimes, it is useful/necessary to have your builds take several “parameters.”

But it then goes on to a use case that probably won’t mean much to dynamic language folk. This is one failing of much of the documentation around Jenkins: it often feels geared towards developers in certain languages when in reality the tool is useful everywhere. The important takeaway here is that builds can take arguments, which can have default values. Here’s an example:

Imagine we have a build which runs a set of simple tests against a live system. And further imagine that said system is composed of a number of different web services. Oh, and we’re running a few different parallel versions of the entire system for testing and staging purposes. We could have one Jenkins job per application/environment combination. Or we could have one parameterized build.

Let’s first specify our parameters from the Configure screen of our job.

Here we’re specifying a TARGET_APPLICATION and TARGET_PLATFORM parameter. These are going to turn into environment variables we can use in our build steps. We can specify default values for these if we like too. I’m just using strings here, but I could also use a select box or file dialog, or other options provided by various plugins.

Now when we hit the build button, instead of the build just starting, we’re prompted for these values.

So with our new build, if we want it to run against the staging environment and just for the foobar application, we enter those values and hit build. That on its own can be used to drastically cut down on the number of individual builds you have to manage in Jenkins. And we’re not just restricted to text inputs; we can use boolean values or even prompt for file uploads at build time.
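
A parameterized job doesn’t have to be triggered from the web UI either; Jenkins also exposes a buildWithParameters endpoint over HTTP. A minimal sketch of calling it (the hostname, job name and absence of authentication here are assumptions for illustration, not from the original post):

import urllib
import urllib2

# POST the parameter values to the job's buildWithParameters endpoint
params = urllib.urlencode({
    "TARGET_APPLICATION": "foobar",
    "TARGET_PLATFORM": "staging",
})
urllib2.urlopen("http://jenkins.example.com/job/smoke-tests/buildWithParameters",
                data=params)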

Throw in a few plugins, though, and things get even more interesting. Jenkins has an overwhelming number of plugins available; if you haven’t spent the good half hour it takes to go down the list I’d highly recommend it. One of Jenkins’ best features is the ability to trigger a build after the successful run of another job. It allows you to chain things like test runs to integration deployments to smoke tests to production deploys: basically your modern continuous deployment/delivery pipeline. The Build Pipeline plugin is excellent for visualising this and introducing human gates if needed. Another useful plugin in this context is the Parameterized Trigger plugin. A limitation of Jenkins is that downstream builds can’t be passed parameters, but this plugin works around that. Instead of ticking the Build other projects option you go for the Trigger parameterized build on other projects box. This allows you to select the project and to specify the parameters to pass. These could be hard-coded values, parameters already passed into the pipeline, or values from other plugins like the Git SHA1 hash or Subversion revision number.

Combine all this together and it’s simple to have a per-project continuous integration build running a test suite, kicking off a standard set of downstream jobs for deploying to a test environment (by passing the relevant parameters), running some basic smoke tests and allowing someone to click a button to deploy to production. Or going the whole continuous deployment, I-trust-my-test-suite route and deploying automatically. All within Jenkins. Getting this working requires a bit of planning: you want all of your projects to be deployed the same way, but you probably want that anyway.

Providing flexible push-button builds and deploys and reducing the number of nearly identical jobs in Jenkins are just two advantages of parameterized builds. Most of the tricks come from thinking about Jenkins as much more than a continuous integration tool and more of an automation framework - I know of at least one large organisation that has pretty much replaced cron with Jenkins for many tasks, for instance. Running tests automatically, in a central environment as close to production as possible, is important. But it’s just a sanity check if you’re doing everything right already. Centralising activity on a build pipeline requires you to be doing all that anyway, but in my opinion gives far more useful and rapid feedback about the state of the code your team is writing.

Exposing Puppet And Facter Information On The Web

I don’t appear to have been in a writing mood recently, but I have been getting back into hacking on a couple of pet projects. The first fruits of this coding (done mainly backwards and forwards on the train) are now available to anyone interested.

Web Facter is a gem which takes the output from Facter and exposes it as JSON over HTTP. In theory you could run this on a configurable port on each of your machines and have a URL you can hit to get information on uptime, networking setup, hostnames or anything else exposed by Facter. It comes with a simple built-in web server and optional HTTP basic authentication if you’re not going to put it behind a proxy. The JSON output should be both human and machine readable, and I have a few ideas for projects which need this information.
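
As a rough illustration of the sort of thing this enables (the hostname, port and chosen facts here are placeholders rather than Web Facter’s documented defaults):

import json
import urllib2

# fetch a machine's facts as JSON and pick a couple out
facts = json.load(urllib2.urlopen("http://web3.example.com:9292/"))
print facts["operatingsystem"], facts["memorytotal"]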

The other project is very similar, and even has a similar name: Web Puppet. You can run this on your puppet master and it exposes the node information (currently including the facts and tags), again as JSON over HTTP. I’m still working on making it a little more usable; at the moment it just shows you all nodes and all their information, which isn’t really sensible if you’re working with a larger cluster. Recent versions of Puppet do have an HTTP-based API, but it requires some hoops to be jumped through and I’m not quite sure from the docs that it lets me do what I want (I have a specific use case, of which more soon, all being well).

Both projects have had me reading the source code of Puppet and Facter, which for the most part has been enjoyable and informative. Puppet in particular has some great comments lying around :) Both of the above projects are available as gems for anyone else to play around with and build on, but my main aim is a little more high level. All being well I’ll have a couple of projects built atop these APIs shortly.

JavaScript In Your Ruby: Mongoid Map Reduce

We’re pretty fond of MongoDB at work and I’ve been getting an opportunity to kick some of the more interesting tyres recently. I thought I’d document something I found myself doing here, half hoping it might be useful for anyone else with a similar problem and also to see if anyone else has a much neater approach. The examples are obviously pretty trivial, but hopefully you get the idea.

So, we’re making use of the rather nice Mongoid Ruby library for defining our models as Ruby classes. Here are a couple of very simple classes; anyone familiar with DataMapper or Django’s ORM should be right at home.

class Publication
  include Mongoid::Document

  field :name,            :type => String
  field :section,         :type => String
  field :body,            :type => String
  field :is_published,    :type => Boolean
end

class LongerPublication < Publication
  field :extra_body,      :type => String
end

So we now have a good few publications and longer publications in our system, and folks have been creating sections with wild abandon. What I’d like to do now is some reporting; specifically, I want to know the number of publications by type and publication status. And let’s allow a breakdown by section while we’re at it.

One approach to this is to use Mongo’s built-in map-reduce capability. Mongoid exposes this pretty cleanly in my view, by allowing you to write the required JavaScript functions (a mapper and a reducer) inline in the Ruby code. This might feel evil, but it seems the best of the available options. For much larger functions I can see that splitting them out into separate JavaScript files for ease of testing might be nice, but where you can just test the input and output of the whole job this works for me.

KLASS = "this._type"
SECTION = "this.section"
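# JavaScript expressions: whichever one is passed to count_by below gets
# interpolated into the map function as the key to emit for each document.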

def self.count_by(type)
  map = <<EOF
    function() {
      function truthy(value) {
        return (value == true) ? 1 : 0;
      }
      emit(#{type}, {type: #{type}, count: 1, published: truthy(this.is_published)})
    }
EOF

  reduce = <<EOF
    function(key, values) {
      var count = 0, published = 0, type = null;
      values.forEach(function(doc) {
        count += parseInt(doc.count);
        published += parseInt(doc.published);
        type = doc.type;
      });
      return {type: type, count: count, published: published};
    }
EOF

  collection.mapreduce(map, reduce).find()

end

In our case that will return something like the following, or rather more specifically it will return a Mongo::Cursor that allows you to get at the following data.

[{"_id"=>"Publication", "value"=>{"type"=>"Publication", "count"=>42.0, "published"=>29.0}},
{"_id"=>"LongerPublication", "value"=>{"type"=>"LongerPublication", "count"=>12.0, "published"=>10.0}}]

I’ve been pretty impressed with both Mongo and Mongoid here. I like the feel of map-reduce jobs for this sort of reporting task. In particular it’s surprising how writing two languages mixed together like this doesn’t really affect the readability of the code, in my view. Given that with a relational database you’d probably be writing SQL anyway, maybe that’s not so surprising - the syntactic differences between JavaScript and Ruby are much smaller than those between pretty much anything and SQL. Lots of folks have written about the rise of polyglot programming, but I wonder if we’ll see an increase in the embedding of one language in another?

Rundeck And Nagios NRPE Checks

I’ve been playing with Rundeck recently. For those that haven’t seen it yet, it’s an application for running commands across a cluster of machines and recording the results. It has both a command line client and a very rich web interface, which lets you both trigger commands and see the results.

I’ve played with a few different jobs so far, including Puppet runs across machines triggered from a Jenkins plugin. I’ve also been looking at running all my monitoring checks at the click of a button (or, again, as part of a smoke test triggered by Jenkins), and I thought that might make a nice simple example.

My checks are written as Nagios plugins, and run periodically by Nagios. I also trigger them manually, using Dean’s NRPE runner script.

Rundeck showing a job output

The above shows a successful run across a few machines I use for experimenting with tools. Hopefully you can see the summary of the run on each of the four machines: each ran five NRPE checks and all passed. On failure we’d see the results as well, with different symbols and colours. We can easily save the output to a file if we need to, rerun or duplicate the job (maybe to have it run against a different group of machines), or export the job definition file to load into another instance.

The same job can also be run from the command line (which makes use of the Rundeck API):

./run -j "Run NRPE checks" -p PRGMR

This example shows running a specific pre-defined job, but it’s equally possible to fire off ad-hoc commands at some or all of the machines Rundeck knows about.
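
From memory the bundled dispatch tool covers that ad-hoc case with something along the lines of the following, though check the Rundeck documentation for the exact flags:

./dispatch -p PRGMR -- uptime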

One thing in particular that I prefer about this approach, compared to say using Capistrano or Fabric for remote execution tasks, is that you get centralised authentication and logging. It would be easy enough to encapsulate the jobs into cap or Fabric tasks as well (and manage those in source control), which means you’re not stuck if Rundeck isn’t available.

On Her Majesty's Digital Service

This blog post is mainly an excuse to use the pun in the title. It’s also an opportunity to tell folks that don’t already know that I’ll be starting a new job on Monday, working for the UK Government. I’m going to be working for the Government Digital Service, a new department tasked with a pretty wide-ranging brief: sorting the Government out online.

The opportunity is huge. And when it came around I couldn’t turn it down. I’m going to be working with a bunch of people I’ve known and respected for a while, as well as other equally smart people. That means I’m going to be back in London again as well.

Hopefully I’ll be able to talk lots about what we’re up to. The groundwork for that has already been laid by the alpha.gov team who have been blogging furiously about topics of interest.

Talking Configuration Management, Vagrant And Chef At LRUG

I stepped in at the last minute to do a talk at the last London Ruby User Group. From the feedback afterwards folks seemed to enjoy it, and I certainly had fun. Thanks to everyone who came along.

As well as the slides, the nice Skills Matter folks have already uploaded the videos from the night.