A few months back I was invited to an RFG Exchange Rounds taping, on containers. There were a number of big name vendors there. I got invited as an end user with opinions :-)
The published segment is on YouTube under the RFG Exchange channel.
Unknown to me, Mark Shuttleworth (Canonical, Ubuntu) was a “headline act” at this taping and I got to hear some of what he had to say, in particular around the Ubuntu “LXD” implementation of containers. I’m not convinced he’s going in the right direction and I pulled what my girlfriend calls “sceptical face”.
Beyond the problem of “machine containers” (which are, basically, the “container as a VM” model, and which is specifically promoted by LXD), I have a real problem with how he sees machine containers being used. This is typified by the segment starting at 2 minutes; we can take a 1990s Linux install and run it unchanged inside a machine container.
It’s very possible that this would work. I ran old 90s-compiled executables for many years; the Linux kernel is pretty good at backwards compatibility in this way. (I doubt a really old a.out format system would run, but an ELF system likely would… albeit with some kernel warnings about deprecated syscalls).
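For illustration only (the image alias, file names and the contents of the metadata file are invented for this sketch), this is roughly how an old root filesystem might be wrapped up as an LXD machine container:

    # Hypothetical sketch: package an existing old root filesystem as an LXD image.
    # "metadata.yaml" describes the image (architecture, creation date, etc.) and
    # "old-rootfs.tar.gz" stands in for the archived 90s-era filesystem.
    tar czf metadata.tar.gz metadata.yaml

    # Import the split image (metadata + rootfs) under a local alias
    lxc image import metadata.tar.gz old-rootfs.tar.gz --alias ancient-linux

    # Launch it as a machine container: a full init and userland, not a single process
    lxc launch ancient-linux ancient01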
The discussion (which doesn’t appear to be in this video, unless I’ve missed it) also touched on “run anything”, with Mark saying that, absolutely, this is a supported use case.
The problem is “just because you can, doesn’t mean you should”. The “run anything” model leads to Wild West deployments.
In an enterprise environment we spend many, many hours standardising our setup. We force a standardised operating system (RedHat or Ubuntu LTS or SUSE, typically, for stability). We force a common identity and access management (I&AM) solution. We have a standardised software stack deployed as part of the build (monitoring, backup, scanning, privileged control…).
We don’t want people running anything they like. I don’t want the introduction of Debian installs inside a machine container inside my RedHat or Ubuntu environment. Yes, LXD supports this; RedHat’s modified Docker supports this; I could even wrap generic Docker to do this. But this way leads to insecurity.
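To show how low the barrier is, here’s a sketch using the public Debian image as an example: one command on any Docker-enabled host gives an interactive Debian userland, regardless of the host’s own distribution.

    # One command pulls and starts an arbitrary distribution's userland on the host
    docker run -it --rm debian:8 /bin/bash

That’s exactly the flexibility being sold, and exactly what an enterprise standard build is trying to prevent.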
Software delivery
I was at a cloud forum last week and a software vendor asked, “do you want me to provide my software in the form of a container?”
No, really, I don’t. If this container allows for access to the OS (eg via SSH) then I need to control that access, which means it’s got to be integrated with my I&AM processes and with my monitoring processes. If there’s persistent data then I need to back it up. The whole of the control stack that I’ve optimised for my chosen OS needs to run in that container.
The only time I would accept a containerized application is if it was a true appliance: an immutable image that is configured via external data (eg from the docker run command line) to point to external resources. Then a container may be a suitable deployment mechanism.
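As a sketch of what that looks like (the image name, variables and paths below are invented for illustration, not taken from any real product):

    # Immutable appliance image: every environment-specific setting is injected
    # at run time via the docker run command line; nothing is edited inside it
    docker run -d --name vendor-app \
      --read-only \
      -e DB_HOST=db.internal.example.com \
      -e SYSLOG_SERVER=syslog.internal.example.com \
      -v /srv/vendor-app/data:/data \
      registry.example.com/vendor/app:1.2.3

Everything the container needs to know about my environment arrives from the outside; the image itself stays unchanged.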
But don’t give me a full machine container and expect me to run it as a virtual machine. I never accepted VMware images for this reason; containers don’t change the security stance.
Conclusion
As I’ve written before, machine containers are a perfectly valid use of container technology. But if you treat a container as a VM then you must run it as a VM. This means full integration into your control stack. You don’t (shouldn’t!) let anyone start up a VM on your network with any OS they like in it; you shouldn’t let anyone start a machine container with untrusted, unknown content.