Docker – One Year On
Introduction
It’s just over a year since I first heard of Docker, and I’ve been using it avidly one way or another ever since. A year seems like a good time to look back over what it’s done for us at $corp compared to what I’d hoped.
Docker is such a flexible technology it’s fascinating to see how different people view it and run with it. Some see it as “just a packaging tool” (as if that were not a significant thing in itself), others as a means to delivering on the microservices dream, still others as a way of saving CPU cycles at scale.
Our use case was very much about improving predictability in a sprawling decades-old codebase for the development, QA, support and testing cycles. As far as we’re concerned, it’s definitely delivered, but we’ve had to go off-piste a bit.
Main Achievements
The tl;dr is that using Docker has saved us significant amounts of time, and in addition enabled us to do things considered previously impractical. There are too many to list (available on request!) but here are some highlights:
- Our biggest dev team (~50 devs) has switched from a “shared dev server” development paradigm, to dev’ing on local machines from Docker images built daily. Others are now following.
- QA processes have improved significantly – for example, automated endpoint testing was a trivial addition to the daily-built dev env, complete with emails sent to the testers with logs attached.
- We have reduced the reliance and load on a centralised IT team that is always torn between stability and responsiveness (“why can’t we just upgrade python on the shared servers!?”).
- CI now uses a centralised reference image as a Jenkins slave, again reducing dependence on IT for changes. Docker’s image layering also provides a neat saving in on-premise disk space.
- Multi-node regression testing environments can be easily deployed on an engineer’s laptop (we use Skydock for this).
- Escrow build auditing processes which were previously onerous now come “for free”, as we phoenix-deploy dev environments.
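As an illustration, the daily dev-image build behind several of the points above boils down to something like the following. The registry hostname and image name here are placeholders, not our real ones:

```shell
#!/bin/sh
# Sketch of a daily dev-image build. REGISTRY and IMAGE are hypothetical.
set -e

# Skip gracefully on machines without Docker installed.
command -v docker >/dev/null 2>&1 || { echo "docker not found; skipping"; exit 0; }

REGISTRY=registry.internal.example.com
IMAGE=dev-env
TAG=$(date +%Y%m%d)   # date-based tag, e.g. 20140601

docker build -t "$REGISTRY/$IMAGE:$TAG" .
docker tag "$REGISTRY/$IMAGE:$TAG" "$REGISTRY/$IMAGE:latest"
docker push "$REGISTRY/$IMAGE:$TAG"
docker push "$REGISTRY/$IMAGE:latest"
```

Devs and testers then pull `:latest` (or a specific date tag) each morning, instead of all sharing one mutable server.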
Central to all of these has been using ShutIt to encapsulate our complex build and dev needs into a single point of reference. Uptake of ShutIt has been far better than with traditional CM tools like Puppet or Ansible, mainly due to speed-to-market rather than the maturity of the tool. Relatively little training is needed to slot your changes into the ShutIt builds and regression-test them; everyone can pitch in quickly.
Lessons Learned
Getting traction for a new technology in a software development house like ours (500+ engineers across various locations) is non-trivial. Here are some lessons learned:
- Find a few motivated people and give them space to make it work
- Focus them on a single well-defined problem (in our case: canned dev environments)
- This problem should have a clear benefit to the business
In terms of the Docker systems, these were some of the things we learned as we went:
- Dockerfiles did not scale with our image needs. This may be because we’re “doing it wrong” (i.e. our software architecture is too monolithic), but Dockerfiles were plain impractical for us, so we effectively dropped them in favour of ShutIt.
- Pre-built vanilla base images were very useful in getting traction. Being able to hand people a useful and up-to-date image to start working on was seen as a win. A monthly cadence for the base image worked for us, while dailies were essential for the development team.
- Building on this known base has facilitated several projects that otherwise may have taken longer to get going.
- The Docker community is very strong and very useful – without Jérôme Petazzoni‘s blogs and help on Docker mailing lists we’d have been stymied a few times.
- Running your containers within a VM can help calm nerves elsewhere in your business about security and isolation, or about managing shared compute resources (until that space matures in the Docker ecosystem).
- Managing your own registry so that people can play with it is a good idea if you want privacy, but make sure you have a plan for when the disk space usage blows up!
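For reference, standing up a private registry of your own is essentially a one-liner with the stock upstream image. The host path and the size threshold below are illustrative assumptions (the container-side path matches the old registry’s default storage location):

```shell
#!/bin/sh
# Sketch: run the stock registry image, keeping its data on a host disk
# you can monitor. /data/registry and the 50GB threshold are arbitrary.

command -v docker >/dev/null 2>&1 || { echo "docker not found; skipping"; exit 0; }

docker run -d --name registry -p 5000:5000 \
  -v /data/registry:/tmp/registry \
  registry

# Disk usage only grows as images are pushed -- watch it before it blows up.
LIMIT_KB=$((50 * 1024 * 1024))          # ~50GB expressed in KB
USED_KB=$(du -sk /data/registry | cut -f1)
[ "$USED_KB" -lt "$LIMIT_KB" ] || echo "registry disk usage over threshold!"
```

A cron job running that last check (or just the `du`) is a cheap way to get warning before the disk fills.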
Future
As Docker usage grows throughout our organisation, there are several things we have our eyes on over the horizon.
- CoreOS looks like a very promising technology for managing Docker resources, and we’re playing with this very informally
- Enterprise support is something we’ve explored, but since we’ve not gone near live with this yet we’ve not pursued it. I’d be interested to hear what people’s experiences with it are, though
Over to You
Are your experiences with getting Docker out there similar, or completely different? I’d be delighted to hear from you (@ianmiell / ian.miell@gmail.com).