In previous posts, and even at Cloud Expo, I’ve been pushing the idea that when it comes to container security it’s the process that matters, not the technology. I’ve tried not to make claims that are tied to a specific solution, although I have written a few posts using Docker as a basis.
I was recently asked my thoughts on Docker in Production: A History of Failure.
Basically, they boil down to “it’s new; if you ride the bleeding edge you might get cut”. Which we all know.
There are some lessons we can take away from that rant, though; indeed, some of the mistakes and lessons come out in the comments. None of them are security related, but they all have an impact on the resiliency of the deployments.
Quickly moving technology may change in incompatible ways
One reason for the popularity of RedHat Enterprise Linux and Ubuntu LTS in large companies is simply the compatibility and stability of the releases. You can stick with RedHat 6 for seven years and not worry (too much!) about patches and enhancements breaking things. Docker, as a technology, is closer to Fedora than to RHEL: it’s evolving, changing and improving very quickly. Each release gets better… but we don’t have compatibility guarantees.
Tooling doesn’t always exist
Again, this is new technology. You may need to invent and create things, especially for your own local requirements. There’s nothing specific to Docker here; systems engineering has always had an “invent stuff” role. That’s part of the fun, and it’s why I work in technology infrastructure teams rather than in application development.
Containers aren’t meant to be persistent… but can be!
In an ideal world a container-based application should not hold persistent state and should treat storage as a service to be attached. But… as previously written, you can attach persistent storage to your containers, and Docker even provides a number of options for doing so. You can run a database inside a Docker container, but should you?
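As a rough illustration (not a recommendation), here’s a minimal sketch using the docker Python SDK; the volume name, image, password and mount path are purely illustrative assumptions:

```python
# A minimal sketch, assuming the docker Python SDK ("pip install docker").
# The volume name, image, password and mount path are illustrative only.
import docker

client = docker.from_env()

# A named volume lives outside any single container's lifecycle.
client.volumes.create(name="pgdata")

# Attach it to a database container; the data now survives container
# restarts and removals, but backups, upgrades and failover are on you.
container = client.containers.run(
    "postgres:13",
    name="pg-in-a-container",
    detach=True,
    environment={"POSTGRES_PASSWORD": "changeme"},
    volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print(container.name, container.status)
```

The named volume outlives the container, which is exactly the trade-off in question: the data persists, but you’ve now taken ownership of everything a database team would normally worry about.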
Don’t blindly sync repositories
You might think that synchronising an external repo to something internal is a good thing; after all, it reduces dependencies on external services and means your build process can continue even if the remote service is unavailable. Well… yes, but that’s not the whole story. Your internal repo should be curated; don’t just blindly sync trees. The “npm left-pad” issue and the Docker signing-key issue would both have propagated into an internal server if you blindly synced. Nor does blind syncing add any security or repeatability to your process.
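As a hedged sketch of what “curated” could mean in practice, here’s one possible approach using the docker Python SDK: only mirror images whose content digest has actually been reviewed. The registry address, image names and digests are made-up examples, not a prescription:

```python
# A hedged sketch of one possible "curated mirror" check, assuming the docker
# Python SDK. Registry address, image names and digests are made-up examples.
import docker

INTERNAL_REGISTRY = "registry.internal.example.com:5000"   # hypothetical

# Only digests that someone has actually reviewed get mirrored internally.
APPROVED_DIGESTS = {
    "alpine": "sha256:0000000000000000000000000000000000000000000000000000000000000000",
}

client = docker.from_env()

def mirror_if_approved(repo: str, tag: str) -> None:
    image = client.images.pull(repo, tag=tag)
    # RepoDigests entries look like "alpine@sha256:..."; keep the digest part.
    digests = {d.split("@", 1)[1] for d in image.attrs.get("RepoDigests", [])}
    if APPROVED_DIGESTS.get(repo) not in digests:
        raise RuntimeError(f"{repo}:{tag} has not been reviewed; not mirroring")
    internal_ref = f"{INTERNAL_REGISTRY}/{repo}"
    image.tag(internal_ref, tag=tag)
    client.images.push(internal_ref, tag=tag)

mirror_if_approved("alpine", "3.12")
```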
Not all tasks are suitable for containers
If your application is already consuming 100% of a server then putting it into a container won’t let you run anything more on that machine. If your application hasn’t been developed for horizontal scalability then spinning up 10 containers won’t give you more capacity.
Conclusion
It may not make sense to run your database server in a Docker world; it may not make sense to run your low-latency app in a container (the people I’ve worked with won’t even use VMs; it must be physical hardware with specific CPUs with known cache-coherency properties, and there must be no disk I/O, etc.); it may not make sense to run your LDAP infrastructure that maxes out all CPU cores in a container…
Containers are just one tool in the toolbox; they’re not the be-all and end-all solution.
Personally, I like the PaaS approach to containers (e.g. Cloud Foundry or Apprenda); this can hide a lot of the rawness and provide a friendlier experience… but at the expense of flexibility. It can encourage the use of 12-factor apps, horizontal scalability and attached services.
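To make the 12-factor point concrete, here’s a tiny sketch (the variable names and defaults are my own assumptions) of configuration and attached services coming from the environment rather than being baked into the image:

```python
# A tiny 12-factor-style sketch: config and attached services come from the
# environment, so the same image runs unchanged everywhere. Names are my own.
import os

DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost/dev")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "info")

def main() -> None:
    # A real app would hand DATABASE_URL to its database driver here;
    # the point is that nothing environment-specific is baked into the image.
    print(f"connecting to {DATABASE_URL} at log level {LOG_LEVEL}")

if __name__ == "__main__":
    main()
```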
But if you do want to use Docker, then go into it with your eyes open; be aware of the risks; be aware that the technology is changing quickly.
And I’ll leave you with a “Downfall” parody, in which Hitler talks about using Docker in production.