The coming of the Kubernetes distributions

Very few people today start using Linux by downloading the Linux kernel and starting from scratch. Most people start with a Linux distribution; for instance Debian, Ubuntu or CentOS. These distributions provide some opinions, some central infrastructure, a brand, strong versioning for the entire ecosystem and a bunch of other things. I posit that we'll see the same pattern emerge with Kubernetes.

What even is Kubernetes?

I've seen Kubernetes described as all of the following:

  • An operating system for your datacenter
  • The distributed systems toolkit
  • The Linux kernel for distributed systems

I think all of these descriptions point to the developers' intent that Kubernetes is something to build upon, rather than a simple out-of-the-box experience. It's predominantly about building agreement on the primitives/APIs of distributed systems.

A name for a thing

I've not seen much discussion of this in general yet, I think because it's early days and many of the people looking at Kubernetes today are either developers or early adopter types. These people have been "downloading the kernel and starting from scratch", until recently most likely running from source downloaded directly from GitHub. If the Kubernetes ecosystem is to grow, that's not how more mainstream IT will adopt Kubernetes.

The reason for discussing this now is that I think a name is useful. That way we can talk about Kubernetes (singular, the software) separately from distributions of Kubernetes (many of them, from different vendors and communities). I'd be happy to see a different name, but I think distribution probably fits best.

Any evidence?

Absolutely. A range of software vendors are providing what I'm calling Kubernetes distributions. Here is a sample; I'm sure there are, and will be, more. I'm also sure that over time some will disappear or retain only a niche audience.

  • OpenShift from Red Hat
  • Tectonic from CoreOS
  • Kismatic from Apprenda
  • Rancher
  • Canonical Distribution of Kubernetes
  • GKE from Google
  • Azure Container Service from Microsoft
  • Photon Platform from VMware
  • Navops from Univa

Note that Canonical are already using the term distribution in the name. I've seen it used in passing in CoreOS, OpenShift and Apprenda press materials too.

What can we expect from Kubernetes distributions?

Running with the analogy that Kubernetes is "an operating system for your datacenter" and that we'll have a range of competing Kubernetes distributions, what else can we expect over the next few years?

Package repositories (aka. app stores)

One of the things provided by the traditional Linux distributions has been a central package repository. Most of the packages you're installing from apt or yum come from that curated set of available packages, not to mention community efforts like EPEL. We already have two package concepts within the Kubernetes ecosystem: container images (often from Docker Hub today, or from internal registries) and Charts, part of the Helm package management tool (now a CNCF project).
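
To make those two package concepts concrete, here's roughly what consuming each looks like (a sketch; the image and chart names are just examples, and the Helm CLI is still evolving at the time of writing):

$ docker pull nginx:1.11        # fetch a container image from Docker Hub
$ helm init                     # install Helm's server-side component into the cluster
$ helm install stable/redis     # deploy the Redis chart from the public Charts repository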

In the short term, expect the shared public Charts repository and Docker Hub to dominate. But over time different vendors will launch their own repositories. Partly this will be about building a trusted ecosystem, partly about limiting permutations for support and testing, and partly about control. The prize here is to be "the enterprise app store", and every vendor in this space is going to at least try to own that as part of their platform.

Kubernetes standards and compliance

In an environment with many distributors of core software, it's common for people to emphasise portability. As vendors extend their distributions (to provide higher-level, but potentially proprietary, features) this can become muddier. Some level of certification is often the answer; see CloudFoundry or OpenStack for recent examples. Kubernetes is already part of the CNCF, itself part of the Linux Foundation. I'd expect work on standards and certification to coalesce there eventually, but my guess is not in the short term.

A fight over who is the most open

Much of the container conversation recently has centered around a weaponisation of open. I think this will continue as the different distributions try to take the community with them while also trying to scale sales. This will be an irritation and is probably best avoided.

Pressure for AWS to offer Kubernetes as a service

I would presume AWS has a very good idea of how many people are actually using Kubernetes on its platform. I think as that grows, and as other vendors' efforts mature, they will come under pressure to offer the Kubernetes API as a service. I'm still split on whether that will actually happen, but that's a longer blog post about economics.

Differentiating features

Ultimately vendors will try to differentiate themselves in this new market. To begin with, the majority of business will be targeting the container-curious, mainly talking up the benefits of containers and Kubernetes. But some potential customers are going to insist on comparing Kubernetes distributions, and winning there is going to be about clear differentiation. Do you want to be the budget offering or the provider with the unique selling point?

Interesting questions

An observation at the moment is that all the current Kubernetes distributions I'm aware of are vendor-owned. Whether open source or not, they are driven by a single vendor (CoreOS, Red Hat, Apprenda, etc.). It will be interesting to see whether, in the current climate, a genuinely free and open source Kubernetes distribution emerges, similar to the role Debian plays in the Linux distribution world.

Unikernels and The End of the General Purpose Operating System

The previous post went into why I think the days of the general purpose operating system (for servers) are numbered. But one interesting area I didn't comment on (though I did cover it in the talk of the same name) was Unikernels.

It's all about cost

One of the topics I didn't really touch on in discussing the end of the general purpose operating system was cost. Historically, maintaining a general purpose operating system has been a costly endeavour, something only the largest companies or communities could sustain by themselves. Think Red Hat, Oracle, Microsoft, Sun, IBM, Debian, etc. The result of that is the assumption, when building software, that you should target one or more of a small number of operating systems. In doing so you're ceding some ground, and likely some revenue, to another vendor. You're also stuck with any underlying limitations of that OS, as well as its release cadence. And invariably you're stuck with the multiplying cost of supporting your software on multiple versions of that OS over time.

I would posit that until relatively recently the cost of that support burden was hugely outweighed by the cost of maintaining an actual operating system. But that's now changing, as I outlined in the previous post. Now a small or medium-sized software company (be it CoreOS, Rancher, Docker, Pivotal, etc.) can build and maintain its own operating system as well. This is very much about the rising level of abstraction: all of the above leverage the huge efforts that go into the Linux kernel and into other projects like systemd (CoreOS) or Alpine (Docker's Moby), for instance.

Enter Unikernels

But where do Unikernels fit into this narrative? I'd argue that they represent the fulfilment of this democratisation. If building and maintaining a traditional OS is only possible for the largest of companies, and building and maintaining a more special-purpose OS (say for running containers, or a storage device) is cost-effective for medium-sized software companies, then Unikernels will allow anyone to build their own single-purpose operating system.

There are other arguments for (and against) Unikernels as an approach, but most of the discussion focuses on the technical. I think the economic side is worth some consideration too. Beyond the typical development and support costs, the ability to own the end-to-end unit of software has lots of benefits, and Unikernels may make those benefits available to everyone, including small organisations and individuals.

The End of the General Purpose Operating System

An interesting chat on Twitter today reminded me that probably not everyone is aware that we're seeing a concerted attempt to dislodge the general purpose operating system from our servers.

I gave a talk about some of this nearly two years ago, and I thought a blog post looking at what I got right, what I got wrong and what's actually happening would be of interest to folks. The talk was written only a few months after I joined Puppet. With a bunch more time working for a software vendor, there are some bits I missed in my original discussion.

What do you mean by general purpose and by end?

First up, a bit of clarification. By general purpose OS I'm referring to what most people use for server workloads today: be it RHEL or variants like CentOS or Fedora, or Debian and derivatives like Ubuntu. We'll include Arch, the various BSD and OpenSolaris flavours, and Windows too. By end I don't literally mean they go away or stop being useful. My hypothesis is that, slowly to begin with and then more quickly, they cease to be the default we reach for when launching new services.

The hypervisor of containers

The first part of the talk included a discussion of what I'd referred to as the hypervisor of containers, what today would more likely be called a CaaS, or containers as a service. I even speculated that VMware would have to ship something in this space (see vSphere Integrated Containers and the work on Photon OS) and that counting out OpenShift would be premature (OpenShift 3 shipped predominantly as a Kubernetes distribution). I'll come back to why this is a threat to your beloved Debian servers shortly.

The race to PID1

Anyone who has run Docker will likely have wrestled with the question of where the role of the host process supervisor (probably systemd) ends and that of the container process supervisor (the Docker engine) begins. Do you have to interact directly with both of them?

Now imagine if all of the software on your servers was run in containers. Why would you need two process supervisors with 100% overlap? The obvious answer is you don't, which is why the fight between Docker and systemd is inevitable. Note that this isn't specific to Docker either: container process lifecycle management is in scope for CRI-O.
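
To make the overlap concrete, here's a sketch of the common pattern of wrapping a container in a systemd unit (the unit and image names are hypothetical):

# /etc/systemd/system/myapp.service
[Unit]
Description=My containerised app
Requires=docker.service
After=docker.service

[Service]
# systemd supervises the docker run process...
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp myorg/myapp
ExecStop=/usr/bin/docker stop myapp
# ...duplicating the restart behaviour docker run --restart=always already provides
Restart=always

[Install]
WantedBy=multi-user.target

Two supervisors, each with its own opinion about whether the process should be running.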

Containers as the unit of software

Hidden behind my hypothesis, and mainly left unsaid, was that containers are becoming the unit of software. By which I mean the software we build or buy will increasingly be distributed as containers and run as containers. The container will carry with it enough metadata for the runtime to determine what resources are required to run it.

The number of simplifying assumptions that come from this shared contract should not be underestimated. At least at the host level, you're likely to need lots of near-identical hosts, all simply advertising their capabilities to the container scheduler.
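
In Kubernetes, for instance, that metadata looks something like the following (a minimal sketch; the image name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myorg/myapp:1.0
    resources:
      requests:
        cpu: 500m        # half a core; the scheduler places the pod on a host with capacity
        memory: 128Mi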

Operating system as implementation detail

What we're witnessing in the market is the development of vertically integrated stacks.

  • Docker for Mac/Windows/AWS/Azure ships with its own operating system, an Alpine Linux derivative nicknamed Moby, which is not intended for direct management by end users.
  • Tectonic from CoreOS is a Kubernetes distribution which runs atop a cluster of managed CoreOS hosts. Most of the operating system is managed with frequent atomic rolling updates.
  • OpenShift Enterprise from Red Hat is another Kubernetes derivative, this time running atop Atomic Host.
  • Pivotal CloudFoundry ships with the IaaS, host OS, kernel, file system and container OS all tested together.

In all of these cases the operating system is an implementation detail of the higher-level software. It's not intended to be directly managed, or at least not managed to the same degree as the general purpose OS you're running today.

This is how the end comes for the majority of your servers running a general purpose operating system. The machines running containers will be running something more single-purpose, and more and more of the software you run will be running in containers.

The reason why you'll do this, rather than compose everything yourself, is compatibility. Whether it's kernel versions, file system drivers, operating system variants or a hundred other variations that make your OS build different from mine, building and testing software that runs everywhere is a Sisyphean task. There is also a commercial angle at play here: the advantage of being able to support a single validated product for everyone.

Implications

There are lots of implications to this move, and it's going to be interesting to see how it plays out with both early adopters and enterprise customers alike.

  • What does this mean for corporate operating system policies?
  • How do standard agent-based monitoring systems work in a world of closed vertical stacks?
  • Will we see this pattern for other types of service in the AWS Marketplace, where instances launched are inaccessible but automatically updated?
  • How does such fast moving software work in environments with rigid change control processes or audit requirements?
  • Many large organisations will end up running more than one of these types of system; how best to manage such heterogeneous environments?
  • Will we see push back from some parties? In particular the open source community who may see this mainly serving the needs of vendors?
  • Does the end of the general purpose OS lead to greater specialism amongst systems administrators?

I'd love to chat about any of this with other folks who have given it some thought. It's interesting watching grand changes play out across the industry and picking up on patterns that are likely obvious in hindsight. And if you like this sort of thing, let me know and I'll try to find time for more speculation.

InfraKit Hello World

Docker just shipped InfraKit a few days ago at LinuxCon and, while at the Docker Distributed Systems Summit, I wanted to see if I could get a hello world example up and running. The documentation is lacking at the moment, especially around how to tie the different components like instances and flavors together.

The following example isn't going to do anything particularly useful, but it's hopefully simple enough to help anyone else trying to get started. I'm assuming you've checked out and built the binaries as described in the README.

First create a directory. We're going to be using InfraKit to manage local files in that directory as part of the demo.

mkdir test

Now create an InfraKit configuration file. We're going to use the file instance plugin to manage files in our directory. This means everything works on the local machine, rather than trying to launch real infrastructure in AWS or similar. InfraKit also requires a flavor plugin. I'm using vanilla here just to meet the requirement for a flavor plugin, but it's not going to actually do anything in this demo. It might be useful to write a noop flavor plugin or similar.

cat garethr.json
{
    "ID": "garethr",
    "Properties": {
        "Instance" : {
            "Plugin": "instance-file",
            "Properties": {
            }
        },
        "Flavor" : {
            "Plugin": "flavor-vanilla",
            "Properties": {
                "Size": 1
            }
        }
    }
}

InfraKit is based on running separate plugins. Each plugin runs as a separate process and provides a filesystem socket in /run/infrakit/plugins. First start up the file plugin:

$ ./infrakit/file --dir=./test
INFO[0000] Starting plugin
INFO[0000] Listening on: unix:///run/infrakit/plugins/instance-file.sock
INFO[0000] listener protocol= unix addr= /run/infrakit/plugins/instance-file.sock err= <nil>

Next, in a separate terminal run the vanilla plugin:

$ ./infrakit/vanilla
INFO[0000] Starting plugin
INFO[0000] Listening on: unix:///run/infrakit/plugins/flavor-vanilla.sock
INFO[0000] listener protocol= unix addr= /run/infrakit/plugins/flavor-vanilla.sock err= <nil>

And finally run the group plugin. I'm passing --log=5 to enable more verbose output, so it's easier to see what's going on with the group.

$ ./infrakit/group --log=5
INFO[0000] Starting discovery
DEBU[0000] Opening: /run/infrakit/plugins
DEBU[0000] Discovered plugin at unix:///run/infrakit/plugins/instance-file.sock
INFO[0000] Starting plugin
INFO[0000] Starting
INFO[0000] Listening on: unix:///run/infrakit/plugins/group.sock
INFO[0000] listener protocol= unix addr= /run/infrakit/plugins/group.sock err= <nil>

With that all set up, we can create a group based on our configuration file from above.

$ ./infrakit/cli group --name group watch garethr.json
watching garethr

Have a look in the test directory. You should see a single file has been created.

$ ls test
instance-1475833380

Let's delete that file and see what happens:

rm test/*

Hopefully InfraKit will spot that the instance (a file in this case) no longer exists and recreate it. You should see something like the following in the logs:

INFO[0612] Created instance instance-1475833820 with tags map[infrakit.config_sha:B2MsacXz8V_ztsjAzu3tu3zivlw= infrakit.group:garethr]

This is obviously a less-than-useful example, but it hopefully provides a good hello world for anyone trying to run InfraKit at this early stage.

Everyone is Not a Software Company

The Everyone is a Software Company meme has been around for a number of years, but it feels increasingly hard to get away from recently. That prompted this post.

But what do we mean by Software Company?

To be a software company you're going to need to employ software engineers and other professionals. Applying that logic to a large number of companies at once, and looking at how existing software companies are set up, we find a few large problems.

Google as an example

In my talk at Velocity, entitled The Two Sides of Google Infrastructure for Everyone Else, I argued both for and against the idea of wholesale adoption of Google-like software and development/operations practices. Even though they derive the lion's share of their revenue from advertising, it's easy to argue that Google are a software company. But what does that look like? What makes Google a software company?

From the Google Annual Report 2015

61,814 full-time employees: 23,336 in research and development, 19,082 in sales and marketing, 10,944 in operations, and 8,452 in general and administrative functions

So, roughly 50% of Google is involved in building or running software. Glassdoor says salaries for engineers at Google average about $126,000-$162,000.

The US Bureau of Labor Statistics says that in 2014 the number of computer programming jobs in the US was 1,114,000, with median pay in 2015 of $100,690 a year. The total number of jobs in the US is about 143 million, with the average wages at $44,569.20 according to the Social Security Administration.
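
Putting those figures side by side (back-of-the-envelope arithmetic only):

$ echo "scale=3; (23336 + 10944) / 61814" | bc    # Google: share building or running software
.554
$ echo "scale=4; 1114000 / 143000000" | bc        # US: programmers as a share of all jobs
.0077

Over half of Google, versus well under one percent of the US workforce.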

The Google Annual Report also states:

Competition for qualified personnel in our industry is intense, particularly for software engineers, computer scientists, and other technical staff

So, quick summary:

  • Software engineers are expensive relative to other employees
  • Demand for the best engineers means even higher wages
  • Proportionally there aren't many software developers
  • There isn't a large surplus of unemployed software engineers

Now, the data above is mainly from US sources, although the Google data is from an international company with offices around the world. My experience says this is likely similar in Europe. Looking into data for India and China would be super interesting, I'd wager.

Problems

One obvious problem is short-term supply and demand. Everyone wants experienced software folks for their transformation effort. But the more organisations buy into the everyone is a software company story, the greater the demand for a finite supply of people. For most that means you'll be able to find fewer of the people you want, because of competition, and afford fewer still, because all that competition pushes up salaries.

I've seen that firsthand while working for the UK Government. People occasionally complained that Government was hampering commercial organisations' growth by employing lots of developers and operations people in London.

You're also immediately in competition for software professionals with existing software companies. Given the high salaries, most of those employers already have developer-friendly working environments and established hiring practices suited to luring developers to work for them. Matching that is hard for large companies without an existing empowered developer organisation. I saw a lot of that at the Government as well.

But the real macro problems are much more interesting. Even if you think 50% is a high mark for the ratio of software folk to others, you probably agree you need a lot more than you have today. And those developers just don't exist today to allow everyone to be a software company. Nor, I'd argue, is education producing enough skilled people in the near term to fill that gap tomorrow. So, what happens?

  • Does everyone sort-of become a software company but not quite?
  • Do most organisations struggle to hire and maintain a software team and see the endeavour fail?
  • Do increasing numbers of developers end up working for a small number of larger and larger software companies?
  • Does outsourcing bounceback, adapt and demonstrate innovation and transformation qualities to go along with the scale?
  • Are countries like India or China able to produce software engineers at sufficient scale to allow their companies to act on everyone becoming a software company?
  • Do we see clear winners and losers, i.e. companies which become software companies and accelerate away from those that don't?

Personally I think that to take advantage of the idea behind the meme we're going to need order-of-magnitude more efficient approaches to software delivery. What that looks like is the most interesting question of all.

Caveats

The above is not a detailed analysis, and undoubtedly has a few holes. It also doesn't overly question the advantage of being a software company, or really question what we actually mean by everyone. But I think the central point holds: everyone is NOT a software company, nor will everyone be a software company any time soon, unless we come up with a fundamentally better approach to service delivery.