|Docker Solved a Key Problem|
Ever since we started Iron.io, we’ve been trying to solve a problem: how to keep our IronWorker containers up to date with newer language runtimes and Linux packages. For the past two years, IronWorker had been using the same, unchanged run-time environment. That is, until a couple of weeks ago, when we released custom language environments.
Since the inception of our service, we have been using a single container that contained a set of language environments and binary packages – Ruby, Python, PHP, Java, .NET, and the other languages we support as well as code libraries such as ImageMagick, SoX, and others.
This container (and the strategy behind it) was showing signs of aging with things like Ruby 1.9.1, Node 0.8, Mono 2, and other older language versions in the default stack. As time went on, the problem only got worse as people adopted newer language versions and were then forced to change their worker code to work with the older versions we supported.
Limited to a Single LXC Container…
IronWorker uses LXC containers to isolate resources and provide secure run-time task environments. LXC works great as a run-time component but was falling short when it came to creating and loading environments within the runners we use to process tasks. We were at an impasse when it came to creating the runtime environments. On the one hand, we couldn’t just update versions in the existing container or else we’d risk breaking a fair number of the million-plus tasks that run each day. (We tried that once way back at the onset of the service and it wasn’t pretty.)
We also couldn’t keep different LXC containers around with various language versions as they contained full copies of the operating system and libraries (which means they would clock in at ~2 GB per image). This might actually work fine in a typical PaaS environment like Heroku where the processes run indefinitely and you could just get the right container before starting up the process. In this type of situation, you could even have large custom images for each customer if you wanted without much worry, but IronWorker is a much different animal.
IronWorker is a large shared multi-tenant task processing system where users queue up jobs and those jobs run across thousands of processors. It can be used to offload tasks from the main response loop to run in the background, continually process transactions and event streams, run scheduled jobs, or perform concurrent processing across a large number of cores. The benefit is users get on-demand processing and very large concurrency without lifting a finger.
Under the hood, the service works by taking a task from a set of queues, installing a run-time environment within a particular VM, downloading the task code, and then running the process. The nature of the service means that all machines are used by all customers, all the time. We don’t devote a machine to particular applications or accounts for an extended period of time. The jobs are typically short lived, some running for just a few seconds or minutes and with a maximum timeout of 60 minutes.
LXC did the job to a point, but we kept asking ourselves: how can we update or add to our existing container while keeping things backwards compatible and not using an insane amount of disk space? Our options seemed pretty limited, and so we kept putting off a decision.
… And Then Came Docker
We had heard about Docker over a year ago. We help organize the GoSF meetup group and Solomon Hykes, the creator of Docker, came to a meetup in March 2013 and gave a demo of his new project Docker, which happens to be written in Go. In fact, he released it to the public that day so it was the first time anyone had really seen it.
The demo was great and the 100+ developers in the audience were impressed with what he and his team had built. (And in the same stroke, as evidenced by one of his comment threads, Solomon started a new development methodology called Shame Driven Development.)
Alas, it was too early back then – we’re talking Day 1 early – so the project wasn’t exactly production ready but it did get the juices flowing.
|Solomon Hykes and Travis Reeder hacking at the OpenStack Summit in 2013.|
A month later, I met up with Solomon at the OpenStack Summit in Portland for some hack sessions to see how we could use Docker to solve our problem. (I think I only went to one session while I was there, instead spending most of the time hacking with Solomon and other developers.)
I started playing with Docker, and Solomon helped me wrap my head around what it could do and how it worked. You could tell right off that it was not only a cool project but that it was also addressing a difficult problem in a well-designed way. It didn’t hurt, from my point of view at least, that it was new, written in Go, and didn’t have a huge amount of technical debt.
Research and Development Phase
Prior to Docker, we had tried to play with different package managers including spending some time with Nix. Nix is a great project and it has a lot of good parts to it, but unfortunately, it wasn’t quite what we needed.
Nix does support atomic upgrades and rollbacks and has a declarative approach to system configuration. Unfortunately, it was hard to maintain the scripts for the different software packages and code libraries we use in our images, and it was also hard to add custom packages and software. The effort to integrate it looked more like a patch or stop-gap for our current system than something new. We were looking for something different that could come closer to meeting our requirements.
At the onset, those requirements were:
- Provide different versions of the same language (e.g. Ruby 1.9 and Ruby 2.1)
- Have a safe way to update one part of the system without breaking other parts (e.g. update only the Python libraries without touching the Ruby libraries)
- Make use of a declarative approach to system configuration (simple scripts that describe what should be inside an image)
- Create an easy way to update and roll out updates
As we got further along, it became apparent there were a few other advantages to using Docker that we hadn’t foreseen. These included:
- Building separate and isolated environments for each runtime/language
- Obtaining support for a copy-on-write (CoW) filesystem (which translates into a more secure and efficient image management approach)
- Having a reliable way to switch between different runtimes on the fly
Working with Docker
As far as working with Docker goes, it wasn’t difficult to integrate since we were already using LXC. (Docker complements LXC with a high-level API that operates at the process level. See the Stack Overflow piece referenced below.)
Once we migrated our existing shell scripts to Dockerfiles and created images, all we had to do to change from using LXC directly was to use ‘docker run’ (instead of ‘lxc-execute’) and specify the image ID required for each job.
Command to run an LXC image:
> lxc-execute -n VM_NAME -f CONFIG_FILE COMMAND
Command to run a Docker image:
> docker run -i -name=VM_NAME worker:STACK_NAME COMMAND
We should note that we depart a bit from the recommended approaches for creating and installing containers. The standard approaches are to either build images at runtime using Dockerfiles or store them in public or private repos in the cloud. Instead, we build images, make snapshots of them, and store them in an EBS volume attached to our system. This is because a worker system needs to set up environments extremely quickly. Creating them at runtime is not an option, nor is downloading them from external storage.
Base Images Plus Diffs
Using Docker also solved our space problem, because each image is just a diff from our base image. That means we can have one base image containing the OS and the Linux libraries we use across all images, and then a bunch of images built off that one. The size difference is only what was added on top of the base image.
For instance, if we install Ruby, only the files that were installed with Ruby are contained in the new image. This may sound confusing, but think of it as a Git repository containing all the files on your computer, where the base image is the master branch and all the other images are branches off the base. This ability to include diffs and build on top of existing containers is going to pay dividends in the future, as it will let us continually roll out new versions as well as add code libraries, binary packages, and more to address particular use cases and solution sets.
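As a concrete illustration of base-plus-diff layering, a Dockerfile for a language image only has to add its own layer on top of the shared base. The image and package names below are illustrative, not our actual Dockerfiles:

```dockerfile
# Hypothetical language image built on a shared base that already
# contains the OS and the common Linux libraries (ImageMagick, SoX, etc.).
FROM worker-base

# Only the files written by this step are stored in the new image;
# everything else is shared with worker-base via the layered filesystem.
RUN apt-get update && apt-get install -y ruby
```

Each such image stores only its own additions, which is why a dozen language stacks no longer cost ~2 GB apiece.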
A Few Issues
We’ve had a few issues in using Docker to create the new language environments and roll them out but nothing major.
- We had some difficulties removing containers after a task had run. The cleanup process was failing on occasion, but we’ve implemented a workaround that addresses the issue relatively cleanly.
- While setting up some software, we found that Docker doesn’t properly emulate some low-level functions such as FUSE. As a result, we had to do some magic to get the Java image to work correctly.
That’s it. As far as any requests we have for the Docker team, they’re mostly around a couple of bug fixes. As for new features, we’re pretty set for now. (We haven’t come up with anything that’s lacking so far, as there’s already a pretty rich set of features.)
A Sidebar on LXC, Containers, and Docker
LXC (LinuX Container) is an operating system–level virtualization method that provides a secure way to isolate one or more processes from other processes running on a single Linux system. By using containers, resources can be isolated, services restricted, and processes provisioned to have a private view of the operating system with their own process ID space, file system structure, and network interfaces. Multiple containers can share the same kernel, but each container can be constrained to use only a defined amount of resources such as CPU, memory, and I/O. As a result, applications, workers, and other processes can be set up to run as multiple lightweight isolated Linux instances on a single host.
Docker is built on top of LXC, enabling image management and deployment services. Here’s an answer on Stack Overflow from Solomon on the differences between and compatibility of LXC and Docker:
If you take a look at Docker’s features, most of them are already provided by LXC. So what does Docker add? Why would I use Docker over plain LXC?
Docker is not a replacement for lxc. “lxc” refers to capabilities of the linux kernel (specifically namespaces and control groups) which allow sandboxing processes from one another, and controlling their resource allocations.
On top of this low-level foundation of kernel features, Docker offers a high-level tool with several powerful functionalities.
Docker In Production
|Docker Powers IronWorker’s Custom Language Stacks|
We’re now running Docker in production within IronWorker. You can currently choose one of 10 different “stacks” (containers) for your workers by setting the “stack” parameter when uploading. It’s a pretty advanced capability when you think about it – i.e. being able to specify a language version for a short-running task across any number of cores.
Using Docker for image management allows us to update images without any fear of breaking other parts of the system. In other words, we can update the ruby1.9 image without touching the ruby2.1 image. (Maintaining consistency is paramount in any large-scale worker system, especially when you support a large set of languages.)
We also have a more automated process in place now for updating images using Dockerfiles that lets us roll out changes on a very predictable schedule. In addition, we have the ability to provide custom images. These can be user-defined around specific language versions or can even include particular language frameworks and code libraries.
The decision to use Docker in production was not all that risky of a move. While it may have been early a year ago, it is a solid product now. The fact that it’s new is actually a benefit in our minds. It has a pretty clean set of features and is built for large scale and dynamic cloud environments like ours.
And as far as advice for others, we suggest making use of the “ready-to-use” Dockerfiles, scripts, and public images. There’s a lot there to start with. In fact, we’ll likely be making our Dockerfiles and images public which means that people will have an easy way to run their workers locally plus we’ll allow people to submit pull requests to improve them.
Processing tens of thousands of compute hours and millions of tasks every day in almost every language is not an easy feat. Docker has allowed us to address some serious pain points without a tremendous amount of effort. It has increased our ability to innovate as well as build new capabilities under the hood of IronWorker. But just as importantly, it allows us to maintain and even surpass the service guarantees we’ve worked hard to establish.
Docker has a great future and we’re glad we made the decision to include it within our stack.
UPDATE: We’ve posted a follow-up article on our experiences with Docker: What We’ve Learned Launching Over 300 Million Containers. In it, we address some of the challenges we faced running a Docker-based infrastructure in production, how we overcame them, and why it was worth it.
For More Information
For more insights on the technologies we use within our IT stack, you can watch this space or sign up for our newsletter. Also, feel free to email us or ping us on Twitter if you have any questions or want to share insights. For more information on Docker, please visit www.docker.io.
To Make Use of IronWorker Custom Environments
Sign up for a free account at Iron.io. You can use the base environment or include the ‘stack’ parameter in your .worker config file along with the stack name to use one of the custom language environments.
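For example, a minimal .worker config file that selects a custom stack might look like the snippet below. The stack name and file names here are illustrative; check the IronWorker docs for the stacks actually available:

```ruby
# hello.worker — assumes a hello.rb script alongside it
runtime "ruby"
exec "hello.rb"
stack "ruby-2.1"   # illustrative stack name; omit to use the base environment
```

With the stack line omitted, workers run in the default environment as before, so existing workers keep working unchanged.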
To learn more about what you can do with a worker system, check out this article on top uses of IronWorker.
About the Author
Travis Reeder is co-founder and CTO of Iron.io, heading up the architecture and engineering efforts. He is a systems architect and hands-on technologist with 15 years of experience developing high-traffic web applications including 5+ years building elastic services on virtual infrastructures.
He is an expert in Go and is a leading speaker, writer, and proponent of the language. He is an organizer of GoSF (1000+ members) and author of two posts on Go: How We Went from 30 Servers to 2: Go and Go After 2 Years in Production.
Roman Kononov also contributed to this article and is a key part of integrating Docker into the Iron.io technology stack. He’s a senior developer and core member of Iron.io’s infrastructure team.