It started with a set of slides by a friend:
My first thought was to wonder how Heartbleed, Shellshock, CVE-2015-7547 and the like fit into this story. He answered “rebuild the world and redeploy”. Which I felt missed the problem.
You also need control over what goes into containers, who can build them, and where they get deployed.
We have decades of history telling us that self-run machines are badly patched and badly maintained; if the bug isn’t in the application code then it’s mostly invisible to the developer. Hell, even sysadmins are bad at patching unless forced (“my server has 1000 days of uptime!” is not a good brag).
What has changed? Why should we expect developers to start maintaining their own private OS instance (which is, effectively, what a container is) now? Worse, in an environment where everyone and their dog can spin up containers, you may not even know where your vulnerable systems are.
With an opinionated PaaS like Cloud Foundry the developer gets no say; the OS layer is patched and maintained under the covers. But what happens if you open up containers without that control?
The story I hear (and Adam’s slides seem to go in that direction) is that containers are great for self-service; allow the developers to push whatever the hell they like; they have the ability to fix it, to make sure the code works; they are now masters of their own fate and they can’t break the host.
But nowhere do I hear “security”. Indeed Adam’s slides specifically permit insecurities:
No need to worry about doing it the right way: just throw the
library binaries, framework templates, ancient, buggy,
security-vulnerable versions of Java, whatever in there
higgledy-piggledy!
This is a bad view of security. If you’re running buggy code (“I’m looking at you, WordPress!”) then your container gets pwned. OK… not the whole host, but there’s now remote code execution inside your environment. And it can talk to the persistent data stores. And can exfiltrate that data. And can be used to pivot to other internal services that were never meant to be exposed… and let’s hope the parent host has a patched kernel so it doesn’t allow container escape.
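To make that pivot risk concrete, here’s a minimal sketch of what an attacker with code execution inside a container can do when nothing restricts east-west traffic. The internal hostnames and ports are hypothetical, purely for illustration:

```python
# pivot_probe.py -- hypothetical illustration: what an attacker who has
# gained code execution inside a container can reach when nothing
# restricts east-west traffic. The hostnames below are made up.
import socket

internal_targets = [
    ("db.internal", 5432),         # persistent data store
    ("cache.internal", 6379),      # internal-only service, never exposed publicly
    ("admin-api.internal", 8080),  # another internal service to pivot to
]

for host, port in internal_targets:
    try:
        # On a flat container network this connect() simply succeeds.
        with socket.create_connection((host, port), timeout=2):
            print(f"reachable: {host}:{port}")
    except OSError:
        print(f"blocked/unreachable: {host}:{port}")
```

None of that needs a kernel exploit; it’s just a flat network doing what flat networks do.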
From a security perspective we need to look at containers as if each were a VM of its own. They have the same vulnerability footprint as a full OS, plus new ones of their own. We can’t rely on traditional external scans to detect the faults; they were never a good idea, and they simply won’t scale to a dynamic, elastic environment. We need to build the scanning in.
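What “building it in” can look like — a minimal sketch, assuming an image-level scanner such as Trivy is installed (any scanner that can fail a build on severity works the same way); the image name is made up:

```python
# scan_gate.py -- minimal sketch of an image-level vulnerability gate.
# Assumes the Trivy CLI is available on the build host; the image name
# is hypothetical.
import subprocess
import sys

IMAGE = "registry.example.com/team/app:1.2.3"  # hypothetical image

# Scan the image contents (OS packages, language libraries) rather than a
# running endpoint -- this is exactly what an external port scan never sees.
result = subprocess.run(
    ["trivy", "image", "--exit-code", "1",
     "--severity", "HIGH,CRITICAL", IMAGE],
)

if result.returncode != 0:
    print("High/critical vulnerabilities found; failing the build.")
    sys.exit(1)

print("Image passed the vulnerability gate.")
```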
That doesn’t mean stopping DevOps; it means embedding a culture of security into the development pipeline. Have your Jenkins job run a vulnerability scan during your auto-build. Have a deployment engine that tracks what is running where. Have a control mechanism to hard-stop pwned services.
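And a similarly rough sketch of the “track what is where and hard-stop it” side, driving the Docker CLI directly; the flagged image name is hypothetical, and a real deployment engine would key off its own inventory rather than a hard-coded string:

```python
# hard_stop.py -- minimal sketch: find running containers built from a
# flagged image and stop them. Uses the Docker CLI via subprocess; the
# flagged image name is hypothetical.
import subprocess

FLAGGED_IMAGE = "registry.example.com/team/app:1.2.3"  # hypothetical

# "Track what is running where": list running containers with their source image.
ps = subprocess.run(
    ["docker", "ps", "--format", "{{.ID}} {{.Image}}"],
    capture_output=True, text=True, check=True,
)

for line in ps.stdout.splitlines():
    container_id, image = line.split(maxsplit=1)
    if image == FLAGGED_IMAGE:
        # "Hard-stop pwned services": take the container down immediately.
        print(f"stopping {container_id} (runs flagged image {image})")
        subprocess.run(["docker", "stop", container_id], check=True)
```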
Sorry; despite the lure of freedom, containers must not become the wild west; we’ll lose, big time, if we let that happen.