Creating A Cucumber Nagios Package With Fpm

I’ve written before about why I like System Packages, but even I’ll admit that the barriers to creating them mean I don’t use them for everything. FPM however is making it much easier, to the point where I’m starting to create a few packages I can reuse on projects. I thought a write up of how I’m doing that for Cucumber-Nagios might be useful.

For those that haven’t seen it yet, FPM (or Effing Package Management) is a tool that helps you build packages, like RPMs and DEBs, quickly. It can take in gems, Python packages, Node.js npm files or just plain directories and makefiles, and from those create one or more system packages. Let’s have a look at a full example with a Ruby gem.

I really like using cucumber-nagios, whatever platform or language I happen to be using at the time. I have a number of Django projects, for instance, with cucumber-nagios checks, so being able to not worry about most of the Ruby install is useful.

In order to use FPM you’ll need to install it. It’s available as a Ruby gem so let’s start there. I’m not going to delve into setting up a RubyGems environment as it’s annoying and covered for most platforms elsewhere on the internet.

<code>gem install fpm</code>
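
Once it’s installed, the simplest use is pointing FPM at a plain directory. As a rough sketch (the package name, version and path here are just placeholders), something like this would roll a directory up into a deb:

<code>fpm -s dir -t deb -n myapp -v 1.0.0 /opt/myapp</code>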

First off, let’s install the cucumber-nagios gem, along with all its dependencies, into a local folder on the build machine. This might be a virtual machine or a separate machine in your cluster; it should be running the same OS as the intended production machines, however. The following examples are from Ubuntu, but it’s much the same for RPM-based distros.

<code>gem install --no-ri --no-rdoc --install-dir ~/tmp/gems cucumber-nagios</code>

Cucumber-nagios has a large number of dependencies, so we’re going to need to create packages for all of those too. FPM will cleverly deal with maintaining the specified dependencies though. We’ll use find to do this quickly, and then instruct FPM to convert each gem to a deb. You could obviously do this line by line if you prefer.

<code>find ~/tmp/gems/cache -name '*.gem' | xargs -rn1 fpm -s gem -t deb -a all</code>
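
If you do want to go line by line, converting a single gem looks something like this (assuming the cached gem file matches the version shown in the dpkg example below):

<code>fpm -s gem -t deb -a all ~/tmp/gems/cache/cucumber-nagios-0.9.0.gem</code>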

That should leave us with lots of new .deb files that we can have a closer look at:

<code>dpkg --info rubygem-cucumber-nagios_0.9.0_i686.deb
dpkg -c rubygem-cucumber-nagios_0.9.0_i686.deb</code>

If you already have a private apt repository set up then just upload these packages and away you go; I’d like to use apt to install them so it can manage all the dependencies for me. If you don’t have one, I’ll show you briefly how to create a local file system repo. It’s not much more work to create a shared repo available over HTTP.

First create a directory to store our packages and move our newly created .deb files into it. You’ll need to be able to execute some of these commands as root but given the topic I’m assuming you’ll be able to deal with that.

<code>mkdir /usr/local/repo
cp *.deb /usr/local/repo</code>

For the next part you’ll need to install the dpkg development tools, and then create a file that can be read by apt when it’s looking for packages it can install.

<code>apt-get install dpkg-dev
dpkg-scanpackages /usr/local/repo /dev/null | gzip -9c > /usr/local/repo/Packages.gz</code>
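
You can sanity check the generated index by having a quick look inside it:

<code>zcat /usr/local/repo/Packages.gz | grep Package:</code>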

Now add your new package repo to your apt sources and update your package cache.

<code># add this line to /etc/apt/sources.list:
deb file:/usr/local/repo ./

apt-get update</code>
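
If you do want the shared HTTP repo mentioned earlier, a quick sketch (the hostname and port here are made up, and any web server pointed at the directory will do) is to serve the repo directory and use an http source instead of the file one:

<code>cd /usr/local/repo && python -m SimpleHTTPServer 8000
# then on each client, in /etc/apt/sources.list:
# deb http://buildhost.example.com:8000 ./</code>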

At this point everything should be up and running. Let’s do a search in our repo:

<code>apt-cache search rubygem-
rubygem-amqp - AMQP client implementation in Ruby/EventMachine
rubygem-builder - Builders for MarkUp.
rubygem-bundler - The best way to manage your application's dependencies
rubygem-cucumber - cucumber-0.10.2
rubygem-cucumber-nagios - Systems testing plugin for Nagios using Cucumber.
rubygem-diff-lcs - Provides a list of changes that represent the difference between two sequenced collections.
rubygem-eventmachine - Ruby/EventMachine library
rubygem-extlib - Support library for DataMapper and Merb
rubygem-gherkin - gherkin-2.3.6
rubygem-highline - HighLine is a high-level command-line IO library.
rubygem-json - JSON Implementation for Ruby
rubygem-mechanize - The Mechanize library is used for automating interaction with websites
rubygem-net-ssh - Net::SSH: a pure-Ruby implementation of the SSH2 client protocol.
rubygem-nokogiri - Nokogiri (鋸) is an HTML, XML, SAX, and Reader parser
rubygem-rack - a modular Ruby webserver interface
rubygem-rack-test - Simple testing API built on Rack
rubygem-rspec - rspec-2.5.0
rubygem-rspec-core - rspec-core-2.5.1
rubygem-rspec-expectations - rspec-expectations-2.5.0
rubygem-rspec-mocks - rspec-mocks-2.5.0
rubygem-templater - Templater has the ability to both copy files from A to B and also to render templates using ERB
rubygem-term-ansicolor - Ruby library that colors strings using ANSI escape sequences
rubygem-webrat - Ruby Acceptance Testing for Web applications</code>

And finally let’s install cucumber-nagios from our shiny new package.

<code>apt-get install rubygem-cucumber-nagios</code>

If everything has worked out you should be able to run the cucumber-nagios-gen command to create a new project. Note that the path may be different, and in the case of Debian-based distros the gem bin directory is not on your PATH.

<code>/usr/lib/ruby/gems/1.8/bin/cucumber-nagios-gen project test
Generating with project generator:
     [ADDED]  .gitignore
     [ADDED]  .bzrignore
     [ADDED]  Gemfile
     [ADDED]  README
     [ADDED]  features/steps
     [ADDED]  features/support</code>
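
As noted above, the gem bin directory isn’t on your PATH on Debian-based distros, so if you get tired of typing the full path you can add it for the current shell:

<code>export PATH=$PATH:/usr/lib/ruby/gems/1.8/bin</code>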

Devops Weekly Archive

Since I launched it back in November my Devops Weekly email has been pretty well received I think. Folks on twitter seem to like it, as do a few people I’ve met at recent events who have said nice things. One thing a few people have been after though is an online archive, either because email just doesn’t work for them or more often because they missed out on the earlier issues.

With a little time this weekend and the help of Jekyll I’ve just added the seventeen issues to date to the new Devops Weekly archive. I’d have liked to do something a bit more useful, maybe introduce per-link tags or a nifty search feature, but I’m pretty busy at present and reckon this should serve the main purpose of getting these links online.

Collecting Metrics With Ganglia And Friends

I had the pleasure of speaking at Cambridge Geek Night on Monday again, the topic of conversation being how to use Ganglia to collect more than just base system metrics.

The audience of web developers, the odd sysadmin and business folk seemed to enjoy it and we had lots of time for questions at the end. The main point I tried to get across was that Ganglia makes a great platform for ad-hoc metrics gathering due to the ability to just throw values at it and get time series graphs without any extra configuration.

The slides include a few bits of code I’ve been using that I’ll throw onto GitHub as a proper project when I get the time. These are very simple Python servers, one which allows for sending metrics information over HTTP, the other using TCP instead. Both are really handy for getting more people to add hooks into applications.
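
The servers themselves aren’t on GitHub yet, but as an illustration of the “just throw values at it” idea, Ganglia’s bundled gmetric command lets you record an arbitrary metric from the shell (the metric name and value here are invented):

<code>gmetric --name=signups --value=42 --type=uint32 --units=signups</code>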

Vagrant At The Guardian

As recent blog posts on here make clear, I’m a big fan of Vagrant. And when Michael asked if I’d fancy talking to some of his colleagues at The Guardian about how I use it I really couldn’t say no.

I gave a short talk, running through the following slides, and ran a few demos showing creating, destroying and provisioning new machines.

More interesting I thought were the questions and conversations that followed. We talked a little about how Vagrant might fit into a continuous integration setup. Another aspect some of the systems folk took to was how flexible the network configuration was, and whether they might be able to use this to more effectively test firewall configurations well before the final push into a production environment. It’s not something I’ve been doing but it sounds feasible and useful in some organisations. If anyone is doing interesting things with Vagrant’s network config I’d be interested to know.

Devops Isn't A Methodology

I was reading Devops is a poorly executed scam and just couldn’t resist a reply. Not because of the entertaining title, but because I both agree and disagree quite strongly with parts of the post. Read it first if you haven’t already. And yes I know I’m feeding the internet.

I’m going to pick parts of the post out and then comment. Hopefully I’m not quoting these in any way out of context.

“It’s got the potential to make a handful of people a lot of money in the same way that Agile did, but nobody is really executing on.”

People are pretty aware of this fact I think, but watch what happens when people post on the mailing lists or turn up at community events with a purely marketing hat on. They just get no traction and even damage their product brand amongst the early adopters. The fact the term is starting to get used in job adverts and marketing materials isn’t really being driven by the people talking about what devops might or might not be. I think the main reason for this is that most of the people I talk to in person or online are actually pretty happy with their jobs and generally work inside companies rather than as independent consultants. They have often reached an age where they want to improve within a given field but would like a wider network than their current colleagues to discuss things with.

“How do you implement Devops?”

I don’t think you do. The comparisons with Agile are interesting from a community point of view but Scrum is a methodology. To me at least devops isn’t; it’s just a banner or tag under which interesting conversations are happening. The argument that “You should be doing this anyway. Not earth shattering.” is a good thing. You’d be surprised by how many people don’t do all the things they should be doing, especially in small and young companies. And one of the reasons for that is no one bothered writing a list of these things down anywhere and then discussing them. I’m not saying this huge list exists or even whether it should, but the discussion is happening.

“The underlying problem, however, is that dev and ops have different goals”

This is spot on. I think this maybe does get missed in talks that focus more on tools, but not in the wider discussion happening about business improvements. Devops quite literally brings those two things together. You’ll always have individual goals, but where you have separate operations and development teams they should have the same fundamental goals.

“Developers develop in the same environment production runs in. If you deploy to Linux, you develop on Linux. No more of this coding on your Macbook Pro and deploying to Ubuntu: that is why you can’t have nice things.”

Yes, yes and yes again. I’m definitely from the developer side of the tracks, I’m constantly telling people this, and it’s something I don’t see enough people doing. What I’d love is for all the operations people to state this to their development team and, most importantly, to help them set that up. Just saying “work like me or I won’t let you near the production machines” is being obstructive. Educating and helping with tooling helps build those bridges and trust. And with trust comes the access the developers want. And fewer stupid bugs and fewer deployment issues with differing package dependencies.

“None of this amounts to a methodology, as the Devops people would have you believe.”

Still unsure which Devops people are saying it’s a bona fide methodology. I see the word used sometimes, but generally in passing and not, I don’t think, meant how you mean it here. And I don’t think I’ve heard people speak about it in person. “Scrum methodology” returns more than 113,000 results on Google. “Devops methodology” returns about 150, some of which 404 and half of which are aggregators pointing at the other half.

“The Devops movement smells of a scam in the making”

Some company or other is definitely going to be scammed into paying over the odds for a consultant because they used the word Devops in the sales pitch. That will have next to nothing to do with what I’d see as the Devops movement and everything to do with human nature (and sales people).

A Continuous Deployment Example Setup

One of the reasons behind getting around to building Vagrantbox.es recently was that I was giving a talk to a group of startups on The Difference Engine programme and I wanted to have an example project to demonstrate various things. I wanted to demonstrate everything from sensible version control habits, configuration management and basic orchestration to, most importantly, a solid deployment process. I’ve decided to write up what I’m doing for deployment because I think it’s pretty nice, and for all the talk about Continuous Deployment I haven’t seen many examples of code and configuration to make it happen.

Most of what I’ll cover is pretty easy to map to whatever technologies you’re using. For this project I’d gone for Git, Django, Gunicorn, Nginx, Fabric, MySQL and Jenkins, and I’m deploying to Ubuntu running on Brightbox Cloud. Apart from the Jenkins instance in the middle you could follow the instructions and swap things out easily.

Jenkins

First up, let’s install Jenkins. I set up a separate cloud instance just to run the continuous integration server. I find this approach easier to manage, but you could always run this locally if you prefer. The Jenkins folk provide very up to date packages for Debian so I chose to use those.
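
I won’t duplicate their installation instructions here; once the Jenkins apt repository and signing key have been added (the Jenkins site has the exact lines to use), the install itself boils down to:

<code>sudo apt-get update
sudo apt-get install jenkins</code>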

Plugins

Jenkins provides a huge number of optional plugins which enable various additional features. Plugins are installed via the web interface at /pluginManager. I’ve installed:

  • Git
  • Cobertura
  • Violations

Only the Git plugin is really required for what I’m doing with deployment. Cobertura and Violations are code quality metrics tools that I use to record output from pylint and code coverage for my test suite.

The Source

My finished project was already on GitHub in a private repository. I’m using a requirements.txt file to record Python dependencies so I can use pip to install them automatically, and I’m using Virtualenv to sandbox this installation. I’m also using South to manage my database schema changes. I won’t go into that here as it’s pretty Python specific; Rails for instance has Active Record migrations, plus RVM and Bundler which do pretty much the same job. PHP has PEAR and some of the frameworks offer a migration tool.

I then created two projects in Jenkins:

Jenkins dashboard

Project 1: Vagrantboxes

This is the main build of my master branch in Git. As well as setting up the Git repo as shown below I’ve set a polling schedule of */5 * * * * (that’s every 5 minutes) and also set Trigger builds remotely so I can have a task in my fabfile which triggers a build immediately.

Git config for Jenkins
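
As an aside, the Trigger builds remotely option just exposes a URL protected by the token you choose, so the fabfile task for kicking off a build immediately can be as simple as wrapping a request like this (the hostname and token are placeholders):

<code>curl "http://jenkins.example.com/job/Vagrantboxes/build?token=SECRET"</code>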

I then have two build steps, both of which execute shell commands. The first installs any new requirements via pip:

bash -l -c "source bin/activate; pip install -r requirements.txt"

The second runs my test suite and generates the XML output required to show the test results in Jenkins:

bash -l -c "source bin/activate; cd vagrantboxes/configs/common; python manage.py jenkins boxes"

I’m using the rather handy Django Jenkins application for this.

So far so good. This gives us a project that, when we push some changes to GitHub, will pull those changes down to the CI server and run our test suite, giving us feedback as to whether the tests pass or fail.

Now for the trick: in Post-build Actions tick Build other projects and specify the name of another project that we’ll set up next. Mine is called Vagrantboxes-deploy.

Post build action in Jenkins

Project 2: Vagrantboxes-deploy

This project is triggered only when the previous project runs successfully. And all it’s going to do is run the deployment script on the project we just built. The setup for this project is very simple: it has one build step which just executes the following:

bash -l -c "cd /var/lib/jenkins/jobs/Vagrantboxes/workspace; source bin/activate; fab appserver deploy"

The specifics of the Fabric script here aren’t that important, but I’m doing something not too dissimilar to what I described here.

The reason I’ve set up a separate project for these is so I can, if I choose, trigger a deployment separately from the full build, and also so I can very easily disable deployments even if the main build is still running.

Conclusions

With this setup, whenever I push code to master it triggers a build. If the test suite passes it runs the deployment script and pushes out the code to the live web servers. This suits me and this project, but you might find it easier to start by pushing all successful builds out to a staging environment, and maybe then moving on to having a new project which is only triggered manually for deploying to production.

project view in Jenkins

This setup has other advantages too. The Jenkins dashboard becomes a handy tool for recording deployment events. You can easily set up emails or IM messages or Campfire posts to alert other team members whenever a deployment happens. And it really, really makes sure your deployment scripts work without hand holding.

This is a simple project that I’m working on on my own, but in a team environment you’d likely have a more complex branching strategy and more Jenkins projects. You might also introduce some gateways for manual testing, but the starting point is the same. Jenkins makes archiving successful build artifacts relatively easy as well; this setup has a few race condition possibilities that you can fix by deploying artifacts from successful builds. Jenkins also supports building from different branches and having different branches trigger different projects, all handy if you want to grow this kind of setup.

Site For Vagrant Base Boxes

A brief conversation with Matt Keating on Twitter finally pushed me over the edge and I’ve built a site I’d been meaning to do for a while.

I’m a huge Vagrant fan, but one thing that often comes up is where to find base boxes. My newly launched site Vagrantbox.es provides just that. At the moment that just means user-submitted boxes being checked and then posted. I’ll likely add comments and ratings and the like if things become popular, but that’s for later.

vagrantbox.es homepage

So, if you know of or host a useful box please let me know. I’ll try to keep up with any submissions.

Devops - More Than Marketing - Talk By James Turnbull

I’ve just found my notes from James Turnbull’s talk at FOSDEM. I found the talk excellent, and I’m already part of the choir. But much of the audience I’d guess have only come across the devops term in passing, or worse had it pushed at them as part of marketing materials. Hopefully I captured the main points:

So what is devops all about?

  • Cooperation (between development and operations teams)
  • Buzzword bingo?
  • Pop culture movement?
  • Discussion
  • It’s early days
  • No one has all the answers
  • Nothing is fixed in stone
  • It’s all about outreach

It’s about

  • Simplicity - Repeatable, Reusable, Easy to communicate
  • Relationships - Engage early, engage often, “Toss it over the fence”, Talk to people
  • Process - Test everything, Automate everything, Redundancy and expectation of failure, Transparent and open to everyone

Tools

  • Not just ops tools - Config management, Deployment and orchestration, Monitoring, Security, Testing
  • Use for entire lifecycle dev -> test -> ops
  • Not just dev tools - Version control, Agile, Application architecture
  • Testing methodology - Low level vs functional
  • Documentation - “The only time the network diagram is up to date is after the post mortem”

Continuous improvement

  • Nothing stands still - Customers, Products, Technology, Your team
  • Strike often, strike hard, be aggressive

It’s a culture change

  • This is Hard
  • People hate change
  • People hate people who introduce change
  • Fear of change is irrational - Listen, Concrete examples
  • Make developers responsible for uptime - Pagers

FUD

  • “We’ve always done this”
  • “That can’t work here”
  • “This is all about one group or another”
  • “You’re an elitist bunch of Europeans”

Dangers

  • Marketing speak
  • Lip service
  • Disenchantment
  • Disenfranchisement

Takeaway

  • “Not about a person, or a team. About changing how your operations team works”
  • Automate away small boring repetitive tasks to make time for interesting activities
  • Embed ops people into dev teams
  • Drag devs to ops standups
  • Build shared appreciation
  • Metrics conversations are really powerful

Configuration Management For Development Environments

I had the pleasure of speaking at FOSDEM last weekend to a packed Configuration and Systems Management devroom.

My presentation covered some of the same ground as recent blog posts, namely why you should be using virtualisation and config management tools to manage your local development environment.

People even said nice things about it:

@garethr basically has this subject completely covered. He’s even advocating the correct editor. excellent #fosdem talk

All in all another good event. I have notes about some of the other talks I went along to that I’ll try to write up soon.

Using Checkinstall With Virtualenv For Python Deployments

Michael Brunton-Spall wrote last week about some frustrations with packaging and deploying Python web applications. Although his experience was with Python, the problems he describes are the same for Ruby, PHP and a whole host of other languages. The following example uses Python, but works just as well for anything else.

Michael has three simple rules for his servers:

  1. they cannot access the internet
  2. they cannot access internal services that are for development
  3. they cannot have compilers / utilities on them

I won’t go into all the reasons for doing this (you can read the blog post linked to above) but these are pretty sensible security precautions.

My approach to this problem would be to use your friendly system packages and a handy tool called Checkinstall to create a deb or rpm. I’m going to use the Eventlet library as an example. This is available on PyPI and one of its dependencies (Greenlets) provides a C extension. The same approach would work for an entire Python web application too. I’m as ever using the apt package management tool, but this should work with yum as well.

The first step is to build the package on a build machine. This should be a machine or virtual machine running the same operating system as your production web servers. You might build these packages manually or as part of a continuous integration system. On this machine we’ll need the compilers and development tools:

<code>sudo apt-get install build-essential python-dev python-setuptools checkinstall
sudo easy_install virtualenv</code>

We’ll also create a virtualenv into which we’ll be installing our packages:

<code>sudo virtualenv --no-site-packages /usr/local/environment
source /usr/local/environment/bin/activate</code>

Now, instead of just calling easy_install to install the package, we prefix it with checkinstall.

<code>sudo checkinstall /usr/local/environment/bin/easy_install eventlet</code>

This will prompt for various metadata about the package you want to create, including the name and version of the package. If you’re using this method in the real world you’ll want to decide on a versioning and naming scheme for your packages to avoid clashes with system-provided packages. You can also set many of these options from the command line rather than having to fill them in manually each time.
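
As a rough example (the package name, version and maintainer here are just illustrative), a mostly non-interactive run might look something like:

<code>sudo checkinstall --pkgname=python-eventlet-venv --pkgversion=20110129 \
  --maintainer=you@example.com --default \
  /usr/local/environment/bin/easy_install eventlet</code>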

Once everything has been filled in successfully this should run through, installing eventlet and greenlets and eventually creating a deb or rpm package depending on what platform you’re running on. You should see something like:

<code>Done. The new package has been installed and saved to

 /home/vagrant/eventlet-gareth_20110129-1_i386.deb

 You can remove it from your system anytime using:

      dpkg -r eventlet-gareth</code>

Now let’s grab that package and take it to one of our front end web servers via a controlled deployment process. That front end web server needs the virtualenv creating but nothing else. So:

<code>sudo apt-get install python-virtualenv
sudo virtualenv --no-site-packages /usr/local/environment</code>

(Now you might be thinking that installing the python-virtualenv package in this way breaks rule 1 above. And you’d be right in most cases, but I’m guessing Michael’s systems team have a local package repo for authorised packages, or alternatively you could download the package to the build machine and push it to the production environment.)
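
For completeness, one way of doing that download-and-push dance, assuming SSH access from the build machine to the web server (whose name here is made up), looks something like:

<code># on the build machine: download the .deb (and any dependencies) without installing
# (add --reinstall if the build machine already has the package installed)
sudo apt-get --download-only install python-virtualenv
# push the cached packages to the web server and install them there
scp /var/cache/apt/archives/python-virtualenv*.deb webserver:/tmp/
ssh webserver sudo dpkg -i /tmp/python-virtualenv*.deb</code>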

Now install the package we created earlier.

<code>sudo dpkg -i eventlet-gareth_20110129-1_i386.deb</code>

That should throw all the required files into the virtualenv environment we created. No compilers. No calls to internal or external systems. Just move some precompiled binaries and text files to predefined places on disk.
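
A quick way to convince yourself it all worked, with no packaging or build tools involved, is to try importing the library from the sandboxed interpreter:

<code>/usr/local/environment/bin/python -c "import eventlet"</code>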

I used a PyPI package as an example. Checkinstall could equally be pointed at a custom build file written especially for your own application, one that moves files and folders to where they are needed. Say something that looks like this:

<code>#!/bin/sh
cp -r /home/stage/myapplication /var/www/apps/</code>

Then, running checkinstall against that (or a more complex build file using Capistrano or Ant or Fabric), you can create a package containing your application code and install it into the specified place.
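
Assuming that build file is saved as something like install.sh, the invocation is the same as before, just with your script as the command to run:

<code>sudo checkinstall --pkgname=myapplication --pkgversion=1.0 ./install.sh</code>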