Information Security Reading List

I read quite a bit (probably a book a week or so) and one of the topics I've been reading on for a while is information security. In a recent conversation someone asked for some book suggestions, so I thought I'd write that up in a blog post rather than an email.

Most of this list isn't particularly technical. It's not a developer's list of software engineering tomes. If you're a developer or operator then I'd recommend reading some of the more policy or journalistic pieces as well for context. And if you're just interested in the topic but not particularly technical I'd skip the security engineering suggestions.

Note that I make no claims about this being a particularly balanced list, it's biased towards what I find interesting to read. Hopefully you'll find it interesting too.

Journalism

Understanding why Information Security is important tends to require some context. The following books provide that, with detailed real-world stories of criminal and government activities.

  • The Dark Net - Jamie Bartlett - an excellent personal tale of investigating the hidden side of the internet.
  • Spam Nation - Brian Krebs - everything you wanted to know about how and why Spam works.
  • Countdown to Zero Day - Kim Zetter - a detailed and fast-paced description of the Stuxnet attack, and its implications.
  • Future Crimes - Marc Goodman - a focus on the criminal possibilities of the modern internet and the internet of things.
  • Worm - Mark Bowden - similar to the excellent tale of Stuxnet above, this is the story of Conficker and how it was discovered.

Policy and context

These books are focused more on government policy and nation state threats, and the debate about the rules of war and the internet.

  • Cyber War - Richard Clarke - probably the best description of what cyber war is and isn't, and some of the geopolitical problems emerging.
  • Cyber War Will Not Take Place - Thomas Rid - a good counter to the above book, with lots more detailed discussion of policy and definition.
  • Inside Cyber Warfare - Jeffrey Carr - really just a run-through of current threats, especially organised crime.

Security engineering

  • Security Engineering - Ross Anderson - highly technical and quite epic, but definitely the best security engineering book around.
  • Threat Modeling - Adam Shostack - detailed descriptions of how and why to conduct threat modelling, with lots of examples.
  • Data Driven Security - Jay Jacobs and Bob Rudis - nice examples, including code samples, of applying data and statistics tools and practices to security problems.
  • Cloud Security and Privacy - Tim Mather, Subra Kumaraswamy, Shahed Latif - a good book to read for anyone working in AWS, Azure or similar. Good discussion of concerns and compliance approaches in third party environments.
  • The Tangled Web - Michal Zalewski - everything you ever wanted to know about the browser security model.
  • Silence on the Wire - Michal Zalewski - described as a field guide to passive reconnaissance and indirect attacks. Good for starting to think about non-obvious security threats.

On my reading list

I've not read these books yet so can't recommend them as such, but they both look like good additions to the list above.

  • Data and Goliath - Bruce Schneier - a look at the large scale data collection programmes of governments and their implications for everyone.
  • Black Code - Ronald J. Deibert - the story of the Citizen Lab and its front-line cyber researchers.

Acceptance testing MirageOS installs

I'm pretty interested in MirageOS at the moment. Partly because I find the idea behind unikernels interesting and partly because I keep bumping into the nice folks from OCaml Labs in Cambridge.

In order to write and build your MirageOS unikernel application you need an OCaml development environment. Although this is documented I wanted something a little more repeatable. I also found and reported a few bugs in the documentation which got me thinking about acceptance testing. I'm not (yet) an OCaml programmer, but infrastructure automation and testing I can do.

Into Puppet

I started out writing a Puppet module to install and manage everything, which is now available on GitHub and on the Forge.

This lets you do something like the following, and have a fully working MirageOS setup on Ubuntu 12.04 or 14.04.

class { 'mirageos':
  user      => 'vagrant',
  opam_root => '/home/vagrant/.opam',
}
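
With the module installed, a quick way to try that out is a one-off puppet apply; the user and path here are just the example values from above:

puppet apply -e "class { 'mirageos': user => 'vagrant', opam_root => '/home/vagrant/.opam' }"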

Given time, inclination or pull requests I'll add support for other operating systems in the future.

But how do you know it works?

The module has a small unit test suite, but it's nice to test the actual running of Puppet and the installation of the software too. For this I've used Test Kitchen and ServerSpec. This allows for spinning up two virtual machines (one for each supported operating system), applying the Puppet manifest and then making some assertions:
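
The spec looks something like the following. This is a sketch rather than the exact file from the repository; the package names, PPA list file path and vagrant user are assumptions:

require 'serverspec'

set :backend, :exec

# packages pulled in by the module; names are illustrative
describe package('ocaml') do
  it { should be_installed }
end

describe package('opam') do
  it { should be_installed }
end

# the PPA should be configured; the exact file name is an assumption
describe file('/etc/apt/sources.list.d/avsm-ppa-trusty.list') do
  it { should be_file }
end

# both tools should run cleanly for the configured user
describe command("su - vagrant -c 'opam --version'") do
  its(:exit_status) { should eq 0 }
end

describe command("su - vagrant -c 'mirage --help'") do
  its(:exit_status) { should eq 0 }
end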

The above is simply checking whether certain packages are installed, the PPA is set up correctly and whether mirage and opam can be executed cleanly.

Can it produce a working unikernel?

The above tells us whether the installation worked, but not whether the resulting software allows us to build MirageOS unikernels. For this I used Bats running in the same Test Kitchen setup.
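
The Bats file is along these lines. Again this is a sketch under assumptions: the mirage-skeleton sample app, the mir-www binary name and the port are illustrative rather than taken from the real suite:

#!/usr/bin/env bats

# sketch of the acceptance tests; paths, sample app and port are assumptions

@test "mirage can configure the sample HTTP unikernel" {
  cd /home/vagrant/mirage-skeleton/static_website
  run mirage configure --unix
  [ "$status" -eq 0 ]
}

@test "the unikernel builds" {
  cd /home/vagrant/mirage-skeleton/static_website
  run make
  [ "$status" -eq 0 ]
}

@test "the running unikernel returns the expected response" {
  cd /home/vagrant/mirage-skeleton/static_website
  ./mir-www &
  pid=$!
  sleep 2
  run curl -sf http://localhost:8080/
  kill $pid
  [ "$status" -eq 0 ]
  [ -n "$output" ]
}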

The above configures and builds a simple HTTP server unikernel, and then checks that when run it returns the expected response on the correct port.

Conclusion

I like the separation of concerns above. I can use the Puppet code without the test code, or even swap the Puppet code out for a shell script if I wanted. I could also run the serverspec tests anywhere I want to check state, which is the reason for separating those tests from the ones building and running a unikernel. Overall the tool chain for ad-hoc infrastructure testing (a quick mention for Infrataster too) is really quite powerful and approachable. I'd love to see more software ship with a user-facing test suite for people to verify their installation works.

Automating Windows development environments

My job at Puppet Labs has given me an excuse to take a closer look at the advancements in Windows automation, in particular Chocolatey and BoxStarter. The following is very much a work in progress but it's hopefully useful for a few things:

  • If, like me, you've mainly been doing non-Windows development for a while, it's interesting to see what is possible
  • If you're starting out with infrastructure development on Windows the following could be a good starting place
  • If you're an experienced Windows pro then you can let me know of any improvements

All that's needed is to run the following from a CMD or PowerShell prompt on a new Windows machine (you can also visit the URL in Internet Explorer if you prefer).

START http://boxstarter.org/package/nr/url?https://gist.githubusercontent.com/garethr/a1838aa68355a0766de4/raw/d92b41ee9dcad68c079d24c64bac7d1d27cf37c7/garethr.ps1

This launches BoxStarter, which executes the following code:
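
The gist contains something along these lines; a rough sketch rather than the exact script, with an illustrative package list:

# sketch of the BoxStarter script; the exact package list in the gist may differ
Set-WindowsExplorerOptions -EnableShowHiddenFilesFoldersDrives -EnableShowFileExtensions
Enable-RemoteDesktop

# install tools via Chocolatey
cinst git
cinst vim
cinst 7zip
cinst GoogleChrome

# run Windows Update, rebooting and resuming as required
Install-WindowsUpdate -AcceptEula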

This takes a while as it runs Windows Update and repeatedly reboots the machine. But once it completes you'll have the listed software installed and configured on a newly up-to-date Windows machine.

Docker, Puppet and shared volumes

During one of the open space sessions at Devopsdays we talked about Docker and configuration management, and one of the things we touched on was using Docker's shared volumes support. This is easier to explain with an example.

First, let's create a Docker image to run Puppet. I'm also installing r10k for managing third party modules.

Docker

FROM ubuntu:trusty

RUN apt-get update -q
RUN apt-get install -qy wget
RUN wget http://apt.puppetlabs.com/puppetlabs-release-trusty.deb
RUN dpkg -i puppetlabs-release-trusty.deb
RUN apt-get update

RUN apt-get install -y puppet ruby1.9.3 build-essential git-core
RUN echo "gem: --no-ri --no-rdoc" > ~/.gemrc
RUN gem install r10k

Let's build that and tag it locally. Feel free to use whatever name you like here.

docker build -t garethr/puppet .

Let's now use that image as a base for another image.

FROM garethr/puppet

RUN mkdir /etc/shared
ADD Puppetfile /
RUN r10k puppetfile check
RUN r10k puppetfile install
ADD init.pp /
CMD ["puppet", "apply", "--modulepath=/modules", "/init.pp","--verbose", "--show_diff"]

This image will be used to create containers that we intend to run. Here we're including a Puppetfile (a list of module dependencies) and then running r10k to download those dependencies. Finally we add a simple Puppet manifest (this would likely be an entire manifests directory in most cases). The final line means that when we run a container based on this image it will run Puppet and then exit.

Again, let's build the image and tag it.

docker build -t garethr/puppetshared .

Just as a demo, here's a sample Puppetfile which includes the puppetlabs stdlib module.

Puppet

mod 'puppetlabs/stdlib'

And again as an example here's a simple Puppet init.pp file. All we're doing is creating a directory and a file at a specific location.

file { '/etc/shared/client':
  ensure => directory,
}

file { '/etc/shared/client/apache.conf':
  ensure  => present,
  content => "not a real config file",
}

Fig

Fig is a tool to declare container types in a text file, and then run and manage them from a simple CLI. We could do all this with straight docker calls too.

master:
  image: garethr/puppetshared
  volumes:
    - /etc/shared:/etc/shared:rw

client:
  image: ubuntu:trusty
  volumes:
    - /etc/shared/client:/etc/:ro
  command: '/bin/sh -c "while true; do echo hello world; sleep 1; done"'

The important part of the above is the volumes lines. What we're doing here is:

  • Sharing the /etc/shared directory on the host with the container called master. The container will be able to write to the host filesystem.
  • Sharing a subdirectory of /etc/shared with the client container. The client can only read this information.

Note the client container here isn't running Puppet. Here it's just running sleep in a loop to simulate a long running process like your custom application.

Let's run the master. Note that this will run Puppet and then exit. But with the above manifest it will create a config file on the host.

fig run master

Then run the client. This won't exit and should just print hello world to stdout.

fig run client

Docker 1.3 adds the handy exec command, which allows for one-off commands to be executed within a running container. Let's use that to see our new config file.

docker exec puppetshared_client_run_1 cat /etc/apache.conf

This should output the contents of the file we created by running the master container.
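
If everything worked you'll see the string set in the Puppet manifest earlier:

not a real config file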

Why?

This is obviously a very simple example but I think it's interesting for a few reasons.

  • We have completely separated our code (in the container) from the configuration
  • We get to use familiar tools for managing the configuration in a familiar way

It also raises a few problems:

  • The host needs to know what types of container are going to run on it, in order to have the correct configuration. If you're using a Puppet module then this is simple enough to solve.
  • The host ends up with all of the configuration for all the containers in one place. You could also do things like encrypting the data and having the relevant keys in one image and not others. Given that if you're on the host you own the container anyway, this isn't as odd as it sounds.
  • We're just demonstrating files here, but if we change our manifest and rerun the Puppet container then we change the config files. Depending on the application though, it won't pick those changes up unless we restart it or create a new container.

Given enough time I may try to build a reference implementation using this approach; if anyone has ideas about that, let me know.

This post was inspired by a conversation with Kelsey and John, thanks guys.

Using Puppet with key/value config stores

I like the central idea behind storing configuration in something like Etcd rather than lots of files on lots of disks, but a few challenges still remain. Things that spring to mind are:

  • Are all your passwords now available to all of your nodes?
  • How do I know when configuration changed and who changed it?

I'll leave the first of those for today (although have a look at Conjur as one approach to this). For the second, I'm quite fond of plain text, pull requests and a well tested deployment pipeline. Before Etcd (or Consul or similar) you would probably have values in Hiera or Data Bags or similar and inject them into files on hosts using your configuration management tool of choice. So let's just do the same with our new-fangled distributed configuration store.

key_value_config { '/foo':
  ensure   => present,
  provider => etcd,
  value    => 'bar',
}

Say you wanted to switch over to using Consul instead? Just switch the provider.

key_value_config { '/foo':
  ensure   => present,
  provider => consul,
  value    => 'bar',
}

You'd probably move all of that out into something like Hiera, and then generate the above resources, but you get the idea.

etcd_values:
  foo: bar
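
Generating the resources from that data might look something like this; a sketch, assuming the future parser available in recent Puppet versions, with the hiera_hash lookup key matching the data above:

# generate a key_value_config resource for each entry in Hiera;
# requires the future parser (or Puppet 4)
$values = hiera_hash('etcd_values', {})

$values.each |$key, $value| {
  key_value_config { "/${key}":
    ensure   => present,
    provider => etcd,
    value    => $value,
  }
}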

The key_value_config type above is implemented in a very simple proof of concept Puppet module. If anyone has any feedback, please do let me know.