Automating Windows development environments

My job at Puppet Labs has given me an excuse to take a closer look at the advancements in Windows automation, in particular Chocolatey and BoxStarter. The following is very much a work in progress but it’s hopefully useful for a few things:

  • If, like me, you’ve mainly been doing non-Windows development for a while, it’s interesting to see what is possible
  • If you’re starting out with infrastructure development on Windows, the following could be a good starting place
  • If you’re an experienced Windows pro, you can let me know of any improvements

All that’s needed is to run the following from a CMD or PowerShell prompt on a new Windows machine (you can also visit the URL in Internet Explorer if you prefer).


This launches BoxStarter, which executes the following code:

This takes a while as it runs Windows update and repeatedly reboots the machine. But once completed you’ll have the listed software installed and configured on a newly up-to-date Windows machine.

Docker, Puppet and shared volumes

During one of the open space sessions at Devopsdays we talked about Docker and configuration management, and one of the things we touched on was using Docker’s shared volumes support. This is easier to explain with an example.

First, let’s create a Docker image to run Puppet. I’m also installing r10k for managing third party modules.


FROM ubuntu:trusty

RUN apt-get update -q
RUN apt-get install -qy wget
RUN wget
RUN dpkg -i puppetlabs-release-trusty.deb
RUN apt-get update

RUN apt-get install -y puppet ruby1.9.3 build-essential git-core
RUN echo "gem: --no-ri --no-rdoc" > ~/.gemrc
RUN gem install r10k

Let’s build that and tag it locally. Feel free to use whatever name you like here.

docker build -t garethr/puppet .

Let’s now use that image as a base for another image.

FROM garethr/puppet

RUN mkdir /etc/shared
ADD Puppetfile /
RUN r10k puppetfile check
RUN r10k puppetfile install
ADD init.pp /
CMD ["puppet", "apply", "--modulepath=/modules", "/init.pp", "--verbose", "--show_diff"]

This image will be used to create containers that we intend to run. Here we’re including a Puppetfile (a list of module dependencies) and then running r10k to download those dependencies. Finally we add a simple Puppet manifest (this would likely be an entire manifests directory in most cases). The final line means that when we run a container based on this image it will run Puppet and then exit.

Again, let’s build the image and tag it.

docker build -t garethr/puppetshared .

Just as a demo, here’s a sample Puppetfile which includes the puppetlabs stdlib module.


mod 'puppetlabs/stdlib'

And again as an example, here’s a simple Puppet init.pp file. All we’re doing is creating a directory and a file at a specific location.

file { '/etc/shared/client':
  ensure => directory,

file { '/etc/shared/client/apache.conf':
  ensure  => present,
  content => "not a real config file",

Fig is a tool to declare container types in a text file, and then run and manage them from a simple CLI. We could do all this with straight docker calls too.

  image: garethr/puppetshared
    - /etc/shared:/etc/shared:rw

  image: ubuntu:trusty
    - /etc/shared/client:/etc/:ro
  command: '/bin/sh -c "while true; do echo hello world; sleep 1; done"'

The important part of the above is the volumes lines. What we’re doing here is:

  • Sharing the /etc/shared directory on the host with the container called master. The container will be able to write to the host filesystem.
  • Sharing a subdirectory of /etc/shared with the client container. The client can only read this information.

Note the client container here isn’t running Puppet. Here it’s just running sleep in a loop to simulate a long running process like your custom application.

Let’s run the master. Note that this will run puppet and then exit. But with the above manifest it will create a config file on the host.

fig run master

Then run the client. This won’t exit and should just print hello world to stdout.

fig run client

Docker 1.3 adds the handy exec command, which allows for one-off commands to be executed within a running container. Let’s use that to see our new config file.

docker exec puppetshared_client_run_1 cat /etc/apache.conf

This should output the contents of the file we created by running the master container.


This is obviously a very simple example but I think it’s interesting for a few reasons.

  • We have completely separated our code (in the container) from the configuration
  • We get to use familiar tools for managing the configuration in a familiar way

It also raises a few problems:

  • The host needs to know what types of container are going to run on it, in order to have the correct configuration. If you’re using a Puppet module then this is simple enough to solve.
  • The host ends up with all of the configuration for all the containers in one place. You could also do things with encrypting the data and having the relevant keys in one image and not others. Given that if you’re on the host you own the container anyway, this isn’t as odd as it sounds.
  • We’re just demonstrating files here, but if we change our manifest and rerun the Puppet container then we change the config files. Depending on the application, it won’t pick that up unless we restart it or create a new container.

Given enough time I may try to build a reference implementation using this approach; if you have ideas about that, let me know.

This post was inspired by a conversation with Kelsey and John, thanks guys.

Using Puppet with key/value config stores

I like the central idea behind storing configuration in something like Etcd rather than lots of files on lots of disks, but a few challenges still remain. Things that spring to mind are:

  • Are all your passwords now available to all of your nodes?
  • How do I know when configuration changed and who changed it?

I’ll leave the first of those for today (although have a look at Conjur as one approach to this). For the second, I’m quite fond of plain text, pull requests and a well tested deployment pipeline. Before Etcd (or Consul or similar) you would probably have values in Hiera or Data Bags or similar and inject them into files on hosts using your configuration management tool of choice. So let’s just do the same with our new-fangled distributed configuration store.

key_value_config { '/foo':
  ensure   => present,
  provider => etcd,
  value    => 'bar',

Say you wanted to switch over to using Consul instead? Just switch the provider.

key_value_config { '/foo':
  ensure   => present,
  provider => consul,
  value    => 'bar',

You’d probably move all of that out into something like Hiera, and then generate the above resources, but you get the idea.

  foo: bar

The above is implemented in a very simple proof of concept Puppet module. If you have any feedback, please do let me know.

Leaving GDS is never easy

The following is the email I sent to lots of my colleagues at the Government Digital Service last week.

So, after 3 rather exciting years I’ve decided to leave GDS.

That’s surprisingly difficult to write if I’m honest.

I was part of the team that built and shipped the beta of GOV.UK. Since then I’ve worked across half of what has become GDS, equally helping and frustrating (then hopefully helping) lots of you. I’ve done a great deal and learnt even more, as well as collected arcane knowledge about government infosec and the dark arts of procurement along the way. I’ve done, and helped others do, work I wouldn’t even have thought possible (or maybe likely is the right word?) when I started.

So why leave? For all the other things I do I’m basically a tool builder. So I’m going off to work for Puppet Labs to build infrastructure tools that don’t suck. That sort of pretty specific work is something that should be done outside government in my opinion. I’m a pretty firm believer in “government should only do what only government can do” (design principle number 2 for those that haven’t memorised them yet). And if I’m honest, focusing on something smaller than fixing a country’s civic infrastructure is appealing for now.

I’ll let you in on a secret; I didn’t join what became GDS because of the GOV.UK project. I joined to work with friends I’d not yet had the chance to work with and to see the inside of a growing organisation from the start. I remember Tom Loosemore promising me we’d be 200 people in 3 years! As far as anyone knows we’re 650+ people. That’s about a person a day for 2 years. I’m absolutely not saying that came without a cost, but for me being part of that was part of the point - so I can be a little accepting with hindsight.

For me, apart from all the short term things (side-note: this job now has me thinking £10million is a small amount of money and 2 years is a small amount of time) there is one big mission:

Make government a sustainable part of the UK tech community

That means in 10 years time experienced tech people from across the country, as well as people straight from university, choosing to work for government. Not just for some abstract and personal reason (though that’s fine too), but because it’s a genuinely interesting place to work. That one’s on us.

Using OWASP ZAP from the command line

I’m a big fan of OWASP ZAP, or the Zed Attack Proxy. It’s surprisingly user friendly and nicely pulls off its aim of being useful to developers as well as more hardcore penetration testers.

One of the features I’m particularly fond of is the aforementioned proxy. Basically it can act as a transparent HTTP proxy, recording the traffic, and then analyse that traffic to conduct various active security tests; looking for XSS issues or directory traversal vulnerabilities, for instance. The simplest way of seeding ZAP with something to analyse is using the simple inbuilt spider.

So far, so good. Unfortunately ZAP isn’t designed to be used from the command line. It’s either a thick client, or it’s a proxy with a simple API. Enter Zapr.

Zapr is a pretty simple wrapper around the ZAP API (using the owasp_zap library under the hood). All it does is:

  • Launch the proxy in headless mode
  • Trigger the spider
  • Launch various attacks against the collected URLs
  • Print out the results

This is fairly limited, in that a spider isn’t going to work particularly well for a more interactive application, but it’s a fairly good starting point. I may add different seed methods in the future (or would happily accept pull requests). Usage wise it’s as simple as:

zapr --summary http://localhost:3000/

That will print you out something like the following, assuming it finds an issue.

| Alert                            | Risk | URL                                   |
| Cross Site Scripting (Reflected) | High | http://localhost:3000/forgot_password |

The above alert is taken from a simple example, using the RailsGoat vulnerable web application as a scapegoat. You can see the resulting output from Travis running the tests.

Zapr is a bit of a proof of concept so it’s not particularly robust or well tested. Depending on usage and interest I may tidy it up and extend it, or I may leave it as a useful experiment and try to finally get ZAP support into Gauntlt; only time will tell.

Consul, DNS and Dnsmasq

While at Craft I decided to have a quick look at Consul, a new service discovery framework with a few interesting features. One of the main selling points is a DNS interface with a nice API. The introduction shows how to use this via the dig command line tool, but how do you use a custom internal DNS server without modifying all your applications? One answer to this question is Dnsmasq.

I’m not explaining Consul here; the above mentioned introduction does a good job of stepping through the setup. The following assumes you have installed and started Consul.

Installation and configuration

I’m running these examples on an Ubuntu 14.04 machine, but dnsmasq should be available and packaged for lots of different operating systems.

apt-get install dnsmasq

Once installed we can create a very simple configuration.

echo "server=/consul/" > /etc/dnsmasq.d/10-consul

All we’re doing here is specifying that DNS requests for consul services are to be dealt with by the DNS server at on port 8600. Unless you’ve changed the Consul defaults this should work.

Just in case you prefer Puppet, there is already a handy dnsmasq module. The resulting Puppet code then looks like this.

include dnsmasq
dnsmasq::conf { 'consul':
  ensure  => present,
  content => 'server=/consul/',


The examples from the main documentation specify a custom DNS server for dig like so:

dig @ -p 8600 web.service.consul

With Dnsmasq installed and configured as above you should just be able to do the following:

dig web.service.consul

And now any of your existing applications will be able to use your consul instance for service discovery via DNS.

Testing Vagrant runs with Cucumber

I’ve been a big fan of Vagrant since its initial release and still find myself using it for various tasks.

Recently I’ve been using it to test collections of Puppet modules. For a single host vagrant-serverspec is excellent. Simply install the plugin, add a provisioner and write your serverspec tests. The serverspec provisioner looks like the following:

config.vm.provision :serverspec do |spec|
  spec.pattern = '*_spec.rb'

But I also found myself wanting to test behaviour from the host (serverspec tests are run on the guest), and also wanted to write tests that checked the behaviour of a multi-box setup. I started by simply writing some Cucumber tests which I ran locally, but I decided I wanted this integrated with Vagrant. Enter vagrant-cucumber-host. This implements a new Vagrant provisioner which runs a set of cucumber features locally.

config.vm.provision :cucumber do |cucumber|
  cucumber.features = []

Just drop your features in the features folder and run vagrant provision. If you just want to run the cucumber features, without any of the other provisioners running you can use:

vagrant provision --provision-with cucumber

Another advantage of writing this as a vagrant plugin is that it uses the Ruby bundled with vagrant, meaning you just install the plugin rather than faff about with a local Ruby install.

A couple of other vagrant plugins that I’ve used to make the testing setup easier are vagrant-hostsupdater and vagrant-hosts. Both help with managing hosts files, which makes writing tests without knowing the IP addresses easier.

Buy vs Build your Monitoring System

At the excellent London Devops meetup last week I asked what was apparently a controversial question:

should you just use software as a service monitoring products rather than integrate lots of open source tools?

This got a few people worked up and I promised a blog post.

Note that I wrote a post listing lots of open source monitoring tools not that long ago. And I’ve been to both of the Monitorama events about open source monitoring. And I have a bunch of Puppet modules for open source monitoring tools. I’m a fan of both open source and of open source monitoring. Please don’t read this as an attack on either, and particularly on the work of awesome people working on great open source monitoring products.

Some assumptions

  1. No one product exists that does everything. I think this is true for SaaS as much as for open source.
  2. Let’s work with about 200 hosts. This is a somewhat arbitrary number, I know; some people will have more and others less.
  3. If it saves money we’ll pay yearly, rather than monthly or hourly.
  4. We could probably get some volume discounts from some of the suppliers, but we’ll use list prices for this post.

Show me the money

So what would it cost to get up and running with a state of the art software as a service monitoring system? In order to do this we need to choose our software. For this post that means I’m going to pick products I’ve used (sometimes only a bit) and like. This isn’t a comprehensive study of all the alternatives I’m afraid - though feel free to write your own alternative blog posts.

  • New Relic provides a crazy amount of data about the running of both your servers and your applications. This includes application performance data, errors, low level metrics and even rolled up method or database query performance. $149 per host per month for our 200 hosts gives us $29,800 per month.

  • Librato Metrics provides a fantastic way of storing arbitrary time series data. We’re already storing lots of data in New Relic but Metrics provides us with less opinionated software so we can use it for anything, for instance number of logins or searches or other business level metrics. We’ll go for a plan with 200 data sources, 100 metrics each and at 10 second resolution for a cost of $3,860 per month.

  • Pagerduty is all about the alerts side of monitoring. Most of the other SaaS tools we’ve chosen integrate with it so we can make sure we get actionable emails and SMS messages to the right people at the right time. Our plan costs $18 per person per month, so let’s say we have 30 people at a cost of $540 per month.

  • Papertrail is all about logs. Simply set up your servers with syslog and Papertrail will collect, analyse and store all your log messages. You get a browser based interface, search tools and the ability to set up alerts. We like lots of logs so we’ll have a plan for 2 weeks of search, 1 year archive and 100GB per month of log traffic. That all costs $575 per month.

  • Sentry is all about exceptions. We could be simply logging these and sending them to Papertrail, but Sentry provides tools for tracking and rolling up occurrences. We’ll go for a plan with 90 days of history and 200 events per minute at a cost of $199 a month.

  • Pingdom used to provide a very simple external check service, but now they have added more complex multistage checks as well as real user monitoring to the basic ping. We’ll choose the plan with 250 checks, 20 Real User Monitoring sites and 500 SMS alerts for $107 a month.

How much!

In total that all comes to $35,081 (£20,922) per month, or $420,972 (£251,062) per year.

Now the first reaction of lots of people will be that that’s a lot of money, and it is. But remember open source isn’t free either. We need to pay for:

  • The servers we run our monitoring software on
  • The people to operate those servers
  • The people to install and configure our monitoring software
  • The office space and other costs of employing people (like management and hiring)

I think people with the ability to build software tend to forget they are expensive, whether as a contractor or as a full time member of staff. And people without management experience tend to forget costs like insurance, rent, management overhead, recruitment, etc.

And probably more important than these for some people we need to consider:

  • The time taken to build a good open source monitoring system

The time needed to put together a good monitoring stack based on, for instance, logstash, kibana, riemann, sensu, graphite and collectd isn’t small. And don’t forget the number of other moving parts like redis, rabbitmq and elasticsearch that need installing, configuring and maintaining. That probably means compromising in the short term or shipping later. In a small team, how core is building your monitoring stack to what you do as a business?

But I can’t use SaaS

For some people, using a software as a service product just isn’t going to cut it. Here’s a list of reasons I can think of:

  • Regulation constrains where your data can be stored, for instance it’s not allowed out of the country
  • Sheer size of infrastructure, although you may be able to get a volume discount it might not be enough

I think everything else is a cost/benefit issue or personal preference (or bias). Happy to add more to that list, but I don’t think it’s a very long list.


I’ve purposefully not talked about the quality of the tools here, just the cost. I’ve also not mentioned that it’s likely not an all or nothing decision; lots of people will mix SaaS products and open source tools.

Whether taking a SaaS approach will be quicker, cheaper or better will depend on your specific business context. But try and make that about the organisation and not about the technology.

If you’ve never used the current crop of SaaS monitoring tools (and not just the ones mentioned above) then I think you’re missing out. Even if you stick with a mainly open source monitoring stack you might look at your tools a bit differently after you’ve experimented with some of the commercial competition.

A template for Puppet modules

A little while ago I published a template for writing your own Puppet modules. It’s very opinionated but comes out of the box with lots of the tools you eventually find and add to your toolbox. I’m posting this as it came up at the recent Configuration Management Camp and, after discussing it, I realised I hadn’t actually written anything about it anywhere.

What do you get?

  • A simple install, config, service class pattern
  • Unit tests with rspec-puppet
  • Rake tasks for linting and syntax checking
  • Integration tests using Beaker
  • A Modulefile to provide Forge metadata
  • Command line tools to upload to the Forge with blacksmith
  • A README based on the Puppetlabs documentation standards
  • Travis CI configuration based on the official Puppetlabs support matrix
  • A Guardfile which can run all the tests when you change manifests

Obviously you can choose not to use parts of this, or even delete aspects, but I find that approach much quicker than starting from scratch or copying files from previous modules and changing names.

How can I use it?

Simple. The following will install the module skeleton to ~/.puppet/var/puppet-module/skeleton. This turns out to be picked up by the Puppet module tool.

git clone
cd puppet-module-skeleton
find skeleton -type f | git checkout-index --stdin --force --prefix="$HOME/.puppet/var/puppet-module/" --

With that in place you can then just run the following to create a new module, where puppet-ntp is the name of our new module.

puppet module generate puppet-ntp

We use puppet module generate like this rather than just copying the files because otherwise you would have to rename everything from class names to test assertions. The skeleton actually contains ERB templates in places, and running puppet module generate results in the module name being available to those templates.

Now what?

Assuming you have run the above commands you should have a folder called puppet-ntp in your current directory. cd into that and then install the dependencies:

bundle install

Bundler is a dependency manager for Ruby. If you don’t already have it installed you should be able to do so with the following:

gem install bundler

Now you have the dependencies why not run the full test suite? This checks syntax, lints the Puppet code and runs the unit tests.

bundle exec rake test

Unit tests give fast feedback and help make sure the code you write is going to do what you intend, but they aren’t actually applying the manifests to a real machine. For that you want an integration test. You’ll need Vagrant installed for this next step. Let’s run those as well with:

bundle exec rspec spec/acceptance

This will take a while, especially the first time. This uses Beaker to download a virtual machine from Puppetlabs (if you don’t already have it) and then brings up a new machine, applies a simple manifest, runs the acceptance tests and then destroys the machine.

The README has more information for running the test suite.

What’s new?

I’ve recently added a Guardfile to help with testing. You can run this with:

bundle exec guard

Now in a separate tab or pane make a change to any of the manifests. The tests should run automatically in the tab or pane where guard is running.

Can you add this new tool?

Probably. Although I started the repo, a few other people have contributed code or made improvements already. Just send a pull request or open an issue.

Code coverage for Puppet modules

One of my favourite topics for a while now has been infrastructure as code. Part of that involves introducing well understood programming techniques to infrastructure - from test driven design, to refactoring and version control. One tool I’m fond of (even with its potential to be misused) is code coverage. I’d been meaning to go code spelunking to see if this could be done for testing Puppet modules.

The functionality is now in master for rspec-puppet, so anyone feeling brave can use it now, or, if you must, wait for the 2.0.0 release. The actual implementation is inspired by the same functionality in ChefSpec, written by Seth Vargo. Lots of the how came from there, and the usage is very similar.

How to use it?

First add (or hopefully change) your Gemfile line item for rspec-puppet to the following:

gem "rspec-puppet", :git => ''

Then all you need to do is include the following line anywhere in a spec.rb file in your spec directory.

at_exit { RSpec::Puppet::Coverage.report! }

What do I get?

Here’s an example module, including a file called coverage_spec.rb. When running the test suite with rake spec you now get coverage details like so:

Total resources:   24
Touched resources: 8
Resource coverage: 33.33%

Untouched resources:
  Exec[Required packages: 'debian-keyring debian-archive-keyring' for nginx]
  Apt::Key[Add key: 7BD9BF62 from Apt::Source nginx]
  Anchor[apt::key/Add key: 7BD9BF62 from Apt::Source nginx]
  Anchor[apt::key 7BD9BF62 present]

Here’s the output on Travis CI as well for a recent build.
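The percentage itself is just simple arithmetic over the catalogue; a sketch of the calculation using the numbers above:

```ruby
# 8 of the 24 resources in the compiled catalogue appeared in a test
total   = 24
touched = 8

coverage = (touched / total.to_f * 100).round(2)
puts "Resource coverage: #{coverage}%"
# => Resource coverage: 33.33%
```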

Why is this useful?

I’ve already found coverage useful when writing tests for a few of my Puppet modules. The information about the total number of resources is interesting (and potentially an indicator of complexity) but the list of untouched resources is the main useful part. These represent both information about what your module is doing, and potential things you might want to test.

I’m hoping to find some more time to make this even better, providing more information about untouched resources, adding some configuration options and hopefully to integrate with the Coveralls API.