Setting Puppet Class Using Environment Variables

I’m not sure how novel this approach is but a few folks at work hadn’t seen it before so I thought it worth jotting down.

If you have even a small but dynamic set of servers then a problem arises with how those nodes are defined in Puppet. A node, remember, is defined in Puppet like so:

node web3.example.com {
  include web_server
}

The problem is twofold. If you have a growing infrastructure, that list of nodes is quickly going to get out of hand. The other problem is around provisioning new hosts, the obvious approach to which is something like:

  1. Summon new EC2 instance
  2. Change the node definition to include the new hostname
  3. Install puppet on the instance and do the SSL certificate signing dance
  4. Run puppet

Step 2 stands out. The others are easily automated, but do you want to automate a change to your puppet manifests and a redeploy to the puppetmaster for a new instance? Probably not. Puppet has the concept of an external node classifier which can be used to solve this problem, but another simpler approach is to use an environment variable on the new machine.

Let’s say we define our nodes something like this instead:

{% codeblock %}
node default {
  case $machine_role {
    frontend: { include web_server }
    backend: { include app_server }
    data: { include db_server }
    monitoring: { include monitoring_server }
    development: { include development }
    default: { include base }
  }
}
{% endcodeblock %}

If a machine runs Puppet with the $machine_role variable set to frontend it includes the web_server class; if that variable equals ‘data’ it’s going to include the db_server class instead. Much cleaner and more maintainable in my view. Now to set that variable.

Facter is the tool used by Puppet to gather system information like the operating system or processor count. You can use these Facter-provided variables anywhere in your manifests. And one way of adding a new fact is via an environment variable on the client: any environment variable prefixed with FACTER_ will be available as a fact in Puppet manifests. So in this case we can:

export FACTER_machine_role=frontend
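
With that exported, Facter should report the new fact alongside the built-in ones straight away; a quick check from the same shell (assuming facter is installed on the box):

facter machine_role
# => frontend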

So our steps from above become something like:

  1. Summon new machine
  2. echo "export FACTER_machine_role=backend" >> /etc/environment
  3. Install puppet on the instance and do the SSL certificate signing dance
  4. Run puppet

Much easier to automate. And if you’re looking at a box and want to know what its role is you can check the relevant environment variable.
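
As a rough sketch, the whole thing can be folded into a single bootstrap script run at first boot (as EC2 user data, say). The package commands, role value and puppetmaster hostname below are all assumptions to adjust for your own setup:

#!/bin/bash
# minimal provisioning sketch, assuming a Debian/Ubuntu style instance
# and a puppetmaster reachable at puppet.example.com
set -e

# record the role so every future Puppet run picks it up
echo "export FACTER_machine_role=backend" >> /etc/environment
export FACTER_machine_role=backend

# install puppet and run it; the certificate still needs signing on the
# master unless you have autosigning configured
apt-get update && apt-get install -y puppet
puppet agent --test --server puppet.example.com --waitforcert 60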

Jenkins Parameterized Builds

I’m a huge Jenkins fan now, but that wasn’t always the case. I started with (and still have a soft spot for) Cruise Control, mainly building .NET and PHP applications. I then jumped to much simpler projects like Integrity, mainly for Python and Ruby projects. I reasoned I didn’t need the complexity of Cruise or Hudson, I just wanted to be able to run my tests on a remote machine and have something go green or red. I then worked out that wasn’t quite the case, and ended up committing webhook-like functionality to Integrity so I could chain builds together. And then I eventually tried Jenkins and found its power mixed with flexibility won me over. That’s really all just context, but hopefully it explains a little about why I like a few Jenkins features in particular, one of which is parameterized builds.

The Jenkins wiki describes this by saying:

Sometimes, it is useful/necessary to have your builds take several “parameters.”

But it then goes on to a use case that probably won’t mean much to dynamic language folk. This is one failing of much of the documentation around Jenkins: it often feels geared towards developers of certain languages when in reality the tool is useful everywhere. The important takeaway here is that builds can take arguments, which can have default values. Here’s an example:

Imagine we have a build which runs a set of simple tests against a live system. And further imagine that said system is composed of a number of different web services. Oh, and we’re running a few different parallel versions of the entire system for testing and staging purposes. We could have one Jenkins job per application/environment combination. Or we could have one parameterized build.

Let’s first specify our parameters from the Configure screen of our job.

Here we’re specifying TARGET_APPLICATION and TARGET_PLATFORM parameters. These are going to turn into environment variables we can use in our build steps. We can specify default values for them too, if we like. I’m just using strings here, but I could also use a select box, a file dialog, or other options provided by various plugins.
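
Inside the job those parameters arrive as ordinary environment variables, so an execute-shell build step can use them directly. Something like the following, where the test runner script is just a made-up stand-in for whatever your build actually does:

# run the test suite against whichever application and platform were
# chosen when the build was triggered
./run_smoke_tests --application "$TARGET_APPLICATION" --platform "$TARGET_PLATFORM"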

Now when we hit the build button, instead of the build just starting, we’re prompted for these values.

So with our new build, if we want it to run against the staging environment and just for the foobar application, we enter those values and hit build. That on its own can be used to drastically cut down on the number of individual builds you have to manage in Jenkins. And we’re not just restricted to text inputs, we can use boolean values or even prompt for file uploads at build time. But throw in a few plugins and things get even more interesting.
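
The same build can also be kicked off remotely through Jenkins’ buildWithParameters endpoint, which is handy for scripting. The hostname, job name and any authentication are placeholders for your own setup:

# trigger a parameterized build over HTTP; add a token or credentials
# if your Jenkins instance has security enabled
curl -X POST "http://jenkins.example.com/job/smoke-tests/buildWithParameters?TARGET_APPLICATION=foobar&TARGET_PLATFORM=staging"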

Jenkins has an overwhelming number of plugins available. If you haven’t spent the good half hour it takes to go down the list I’d highly recommend it. One of Jenkins’ best features is the ability to trigger a build after the successful run of another job. It allows you to chain things like test runs to integration deployments to smoke tests to production deploys. Basically your modern continuous deployment/delivery pipeline. The Build Pipeline plugin is excellent for visualising this and introducing human gates if needed. Another useful plugin in this context is the Parameterized Trigger plugin. A limitation of Jenkins is that the standard downstream trigger can’t pass parameters, but this plugin works around that. Instead of ticking the Build other projects option you go for the Trigger parameterized build on other projects box. This allows you to select the project and to specify parameters to pass. These could be hard-coded values, parameters already passed into the pipeline, or things from other plugins like the git SHA1 hash or Subversion revision number.

Combine all this together and it’s simple to have a per-project continuous integration build running a test suite, kicking off a standard set of downstream jobs for deploying to a test environment (by passing the relevant parameters), running some basic smoke tests and allowing someone to click a button to deploy to production. Or going the whole continuous deployment, I-trust-my-test-suite route and deploying automatically. All within Jenkins. Getting this working requires a bit of planning. You want all of your projects to be deployed the same way, but you probably want that to be the case anyway.

Providing flexible push-button builds/deploys and reducing the number of nearly identical jobs in Jenkins are just two advantages of using parameterized builds. Most of the tricks come from thinking of Jenkins less as a continuous integration tool and more as an automation framework - I know at least one large organisation that has pretty much replaced cron for many tasks with Jenkins, for instance. Running tests automatically, and in a central environment as close to production as possible, is important. But it’s just a sanity check if you’re doing everything right already. Centralising activity on a build pipeline requires you to be doing all that anyway, but in my opinion gives way more useful and rapid feedback about the state of the code your team is writing.

Exposing Puppet And Facter Information On The Web

I don’t appear to have been in a writing mood recently but I’ve been getting back into hacking on a couple of pet projects. The first fruits of this coding (mainly backwards and forwards on the train) I’ve just made available to anyone interested.

Web Facter is a gem which takes the output from Facter and exposes this as JSON over HTTP. In theory you could run this on a configurable port on each of your machines and have a URL you can hit to get information on uptime, networking setup, hostnames or anything else exposed by Facter. It comes with a simple built-in web server and optional http basic authentication if you’re not going to do this via a proxy. The JSON display should be both human and machine readable, and I have a few ideas for projects which needed this information.
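
Once it’s running on a node, getting at the facts is just an HTTP request away. The port and credentials here are placeholders, and the exact URL will depend on how you’ve set the gem up; check the Web Facter README for the real defaults:

# fetch a node's facts as JSON (port and credentials are made up here)
curl -s -u admin:secret http://web3.example.com:9292/ | python -mjson.tool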

The other project is very similar, and even has a similar name: Web Puppet. You can run this on your puppet master and it exposes the node information (currently including the facts and tags), again as JSON over HTTP. I’m still working on this to make it a little more usable. At the moment it just shows you all nodes and all information; if you’re working with a larger cluster this isn’t really sensible. Recent versions of Puppet do have an HTTP-based API, but it requires some hoops to be jumped through and I’m not quite sure from the docs that it lets me do what I want (I have a specific use case, of which more soon, all being well).

Both projects have had me reading the source code of Puppet and Facter, which for the most part has been enjoyable and informative. Puppet in particular has some great comments lying around :) Both of the above projects are available as gems for anyone else to play around with and build on, but my main aim is a little more high level. All being well I’ll have a couple of projects built atop these APIs shortly.

Javascript In Your Ruby: Mongoid Map Reduce

We’re pretty fond of MongoDB at work and I’ve been getting an opportunity to kick some of the more interesting tyres recently. I thought I’d document something I found myself doing here, half hoping it might be useful for anyone else with a similar problem and also to see if anyone else has a much neater approach. The examples are obviously pretty trivial, but hopefully you get the idea.

So, we’re making use of the rather nice Mongoid Ruby library for defining our models as Ruby classes. Here are a couple of very simple classes. Anyone familiar with DataMapper or Django’s ORM should be right at home here.

class Publication
  include Mongoid::Document

  field :name,            :type => String
  field :section,         :type => String
  field :body,            :type => String
  field :is_published,    :type => Boolean
end

class LongerPublication < Publication
  field :extra_body,      :type => String
end

So we now have a good few publications and longer publications in our system. And folks have been creating sections with wild abandon. What I’d like to do now is some reporting; specifically I want to know the number of publications by type and publication status. And let’s allow a breakdown by section while we’re at it.

One approach to this is to use Mongo’s built-in map-reduce capability. Mongoid exposes this pretty cleanly in my view, by allowing you to write the required JavaScript functions (a mapper and a reducer) inline in the Ruby code. This might feel evil, but it seems the best of the available options. I can see that for much larger functions splitting this out into separate JavaScript files for ease of testing might be nice, but where you can just test the input/output of the whole job this works for me.

KLASS = "this._type"
SECTION = "this.section"

def self.count_by(type)
  map = <<EOF
    function() {
      function truthy(value) {
        return (value == true) ? 1 : 0;
      }
      emit(#{type}, {type: #{type}, count: 1, published: truthy(this.is_published)})
    }
EOF

  reduce = <<EOF
    function(key, values) {
      // sum the per-document counts and published flags for each type
      var count = 0, published = 0, type = null;
      values.forEach(function(doc) {
        count += parseInt(doc.count);
        published += parseInt(doc.published);
        type = doc.type;
      });
      return {type: type, count: count, published: published}
    }
EOF

  collection.mapreduce(map, reduce).find()

end

In our case that will return something like the following, or rather more specifically it will return a Mongo::Cursor that allows you to get at the following data.

[{"_id"=>"Publication", "value"=>{"type"=>"Publication", "count"=>42.0, "published"=>29.0}},
{"_id"=>"LongerPublication", "value"=>{"type"=>"LongerPublication", "count"=>12.0, "published"=>10.0}}]

I’ve been pretty impressed with both Mongo and Mongoid here. I like the feel of map-reduce jobs for this sort of reporting task. In particular it’s surprising how writing two languages mixed together like this doesn’t really affect the readability of the code in my view. Given that with a relational database you’d probably be writing SQL anyway maybe that’s not that surprising - the syntactic differences between JavaScript and Ruby are much smaller than those between pretty much anything and SQL. Lots of folks have written about the rise of polyglot programming, but I wonder if we’ll see an increase in the embedding of one language in another?

Rundeck And Nagios Nrpe Checks

I’ve been playing with Rundeck recently. For those that haven’t seen it yet, it’s an application for running commands across a cluster of machines and recording the results. It has both a command line client and a very rich web interface, which both allows you to trigger commands and shows the results.

I’ve played with a few different jobs so far, including Puppet runs across machines triggered by a Jenkins plugin. I’ve also been looking at running all my monitoring tasks at the click of a button (or again as part of a smoke test triggered by Jenkins) and I thought that might make a nice simple example.

My checks are written as Nagios plugins, and run periodically by Nagios. I also trigger them manually, using Dean’s NRPE runner script.

Rundeck showing a job output

The above shows a successful run across a few machines I use for experimenting with tools. Hopefully you can see the summary of the run on each of the four machines: each ran five NRPE checks and all passed. On failure we’d see the results as well as different symbols and colours. We can easily save the output to a file if we need to, rerun or duplicate the job (maybe to have it run against a different group of machines), or export the job definition file to load into another instance.

The same job can also be run from the command line (which makes use of the Rundeck API):

./run -j "Run NRPE checks" -p PRGMR

This example shows running a specific pre-defined job, but it’s equally possible to fire off ad hoc commands at some or all of the machines Rundeck knows about.
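
If I remember the bundled shell tools correctly, ad hoc commands go through the dispatch command rather than run; something along these lines, though check dispatch --help for the exact flags on your version:

# run an arbitrary command across every node in the PRGMR project
./dispatch -p PRGMR -- uptime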

One thing in particular that I prefer about this approach to say using Capistrano or Fabric for remote execution tasks is that you have a centralised authentication and logging capability. It would be easy enough to encapsulate the jobs into cap or fabric tasks (and manage that in source control) which means you’re not stuck if Rundeck isn’t available.

On Her Majesty's Digital Service

This blog post is mainly an excuse to use the pun in the title. It’s also an opportunity to tell folks that don’t already know that I’ll be starting a new job on Monday, working for the UK Government. I’m going to be working for the Government Digital Service, a new department tasked with a pretty wide brief: sorting the Government out online.

The opportunity is huge. And when it came around I couldn’t turn it down. I’m going to be working with a bunch of people I’ve known and respected for a while, as well as other equally smart people. That means I’m going to be back in London again as well.

Hopefully I’ll be able to talk lots about what we’re up to. The groundwork for that has already been laid by the alpha.gov team who have been blogging furiously about topics of interest.

Talking Configuration Management, Vagrant And Chef At Lrug

I stepped in at the last minute to do a talk at the last London Ruby User Group. From the feedback afterwards folks seemed to enjoy it and I certainly had fun. Thanks to everyone who came along.

As well as the slides the nice Skills Matter folks have already uploaded the videos from the night.

Vim With Ruby Support Using Homebrew

I’ve spent a bit of time this weekend cleaning, tidying and upgrading software on my Mac. While doing that I got round to compiling my own Vim. I’d been meaning to do this for a while; I prefer using Vim in a terminal to using MacVim, and I like having access to things like Command-T, which requires Ruby support that the built-in version lacks.

Vim isn’t in Homebrew, because Homebrew’s policy is not to provide duplicates of already-installed software. Enter Homebrew Alt, which provides formulae for anything not allowed by the Homebrew policy. As luck would have it a Vim formula already exists. And installing from it couldn’t be easier.

brew install https://raw.github.com/adamv/homebrew-alt/master/duplicates/vim.rb

As it turns out this failed the first time I ran it because I had an RVM-installed Ruby on my path. I reset this to the system version and everything compiled fine.

rvm use system

Note also that it’s really quite simple to use a different revision or different flags when compiling. Just download that file, modify it, serve it locally (say with a one-line Python web server) and point brew install at it (a quick sketch of this follows). Next step: running off HEAD for all the latest and greatest Vim features.
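
As a rough sketch of that, assuming you just want to tweak the version or configure flags in the formula:

# grab the formula, edit it, then serve it over HTTP for brew to install
curl -O https://raw.github.com/adamv/homebrew-alt/master/duplicates/vim.rb
# ... edit vim.rb to change the version, revision or configure flags ...
python -m SimpleHTTPServer 8000 &
brew install http://localhost:8000/vim.rb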

Jenkins Build Pipeline Example

The idea of a build pipeline for web application deployment appears to have picked up lots of interest from the excellent Continuous Delivery book. Inspired by that, some nice folks have built an excellent plugin for Jenkins, unsurprisingly called the Build Pipeline Plugin. Here’s a quick example of how I’m using it for one of my projects*.

Build pipeline example in Jenkins

The pipeline is really just a visualisation of upstream and downstream builds in Jenkins given a starting point, plus the ability to set up manual steps rather than just the default build-after ones. That means the steps are completely up to you and your project. In this case I’m using:

  1. Build - downloads the latest code and any dependencies. You could also create a system package here if you like. If successful triggers…
  2. Staging deploy - In this case I’m using Capistrano, but it could easily have been rsync, Fabric or triggering a Chef or Puppet run. If successful triggers…
  3. Staging test - This is a simple automated test suite that checks that the site on staging is correct. The tests are bundled with the code, so are pulled down as part of the build step. If the tests pass…
  4. Staging approval - This is one of the clever parts of the plugin. This Jenkins job actually does nothing except log its successful activation. It’s only run when I press the Trigger button on the pipeline view. This acts as a nice manual gate for a once-over check on staging.
  5. Production deploy - Using the same artifact as deployed to staging, this job triggers the deploy to the production site, again via Capistrano (both deploy steps are sketched just below).
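
The deploy jobs themselves are nothing more than execute-shell steps wrapping capistrano. Roughly, assuming the multistage extension and standard task names:

# staging deploy build step
bundle exec cap staging deploy
# production deploy build step (only reached after the manual approval gate)
bundle exec cap production deploy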

I’m triggering builds on each commit too, via a webhook. But I can also kick off a build by clicking the button on the pipeline view if I need to.
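
The webhook itself is nothing fancy; a git post-receive hook that pokes the Jenkins job’s remote trigger URL is enough. The hostname, job name and token below are placeholders for whatever your setup uses:

#!/bin/sh
# post-receive hook: ask Jenkins to start the pipeline after each push
curl -s "http://jenkins.example.com/job/site-build/build?token=SECRET"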

Pipeline example showing build in progress

Note that I’m only allowing the last build to be deployed, given that only that one can be checked on staging. Again this is configuration specific to my usage; the plugin lets you operate in a number of different ways. There are a number of tweaks I want to make to this, mainly around experimenting with parameterized builds to pass useful information downstream and even allow parallel execution. For the moment I have the Block build when upstream project is building flag checked on the deploy.

 * Yes, this is a one-page site. With a five-step build process in Jenkins, including a small suite of functional tests and a staging environment. This is what we call overengineering.

Varnish At Refresh Cambridge

I did a quick lightning talk at the Refresh Cambridge meetup last night: a very quick introduction to Varnish. Given 10 minutes, all I really wanted to do was get people to go away and take a look at it. Lots of folks in the room hadn’t come across it before, so hopefully the talk was well pitched.

Several people asked about slightly dynamic pages and I only got the chance to mention support for ESI (Edge Side Includes) at the end. Conversation afterwards turned to various parts of the modern web stack and I had a pretty good time being opinionated. Hopefully more of the same next month.