Unless you’ve been living in a cave for the past couple of months, you’ll have heard that Equifax, one of the ‘big three’ credit reporting agencies, suffered a massive breach that leaked privileged data on more than 143 million people in the US (and millions more outside it).
The story went from bad to worse as the company comprehensively failed to handle the response: poor communication, staff handing out the URL of a phishing site, website failures, and reports that three executives sold millions of dollars in shares before the breach was disclosed.
I was asked an interesting question:
I am about to investigate Docker. We are moving to AWS too. So, in your opinion, should I put my energy into EC2 Container Service or into Docker on EC2? Which is better?
I find this type of question interesting because there isn’t really a “one size fits all” answer; it depends on your use cases.
In previous posts I’ve gone into some detail about how Docker works and some of the ways we can use and configure it. Those posts were aimed at technologists who want to use Docker, and at security staff who want to control it.
It was pointed out to me that this doesn’t really help leadership teams. They’re being shouted at: “We need Docker! We need Docker!” They don’t have the time (and possibly not the skills) to delve into the low levels the way I have.
WARNING: technical content ahead! There’s also a tonne of config files, which make this page look longer than it really is, but hopefully they’ll help other people who want to do similar work.
A few months back I replaced my OpenWRT router with a CentOS 7-based one. This machine has been working very well and handles my Gigabit FIOS traffic without any issues.
“Those who cannot remember the past are condemned to repeat it.” – George Santayana
Default broken

I was reminded, last week, of how old issues repeat.
Back in the 90s it was a truism that if you put an “out of the box” RedHat 4 server (not RedHat Enterprise; the original freeware version) on the internet, it would be compromised within hours. And so we learned: our default builds didn’t have telnet, didn’t have every possible service installed, didn’t have vulnerable configurations.
One of the big problems in a cloudy environment is how to let an application get the username/password it needs to reach a backend service (e.g. a MySQL database). With a traditional application the operations team can inject these credentials at install time, but a cloudy app needs to be able to start/stop/restart/scale without human intervention. This gets harder with containers, because they may be started and destroyed far more frequently.
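As a taster of one way to tackle this, here’s a minimal sketch using Docker Swarm secrets; the secret name, password and image name are invented for the example, and tools like Vault or cloud-native secret stores are equally valid approaches:

# Store the password in the swarm's encrypted store
% echo "s3kr1t" | docker secret create db_password -

# Any service granted the secret sees it as an in-memory file at
# /run/secrets/db_password, so nothing is baked into the image
% docker service create --name myapp --secret db_password myapp-image

The application then just reads /run/secrets/db_password at startup, which works the same no matter how often the container is restarted or rescheduled.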
As we’ve previously seen, Docker Swarm mode is a pretty powerful tool for deploying containers across a cluster. It has self-healing capabilities, built-in network load balancing, scaling, private VXLAN networks and more.
Docker Swarm will automatically try to place your containers to provide maximum resiliency within the service. So, for example, if you request 3 running copies of a container, it will try to place them on three different machines.
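For example, a quick sketch (the service name is invented; web-server is the image built in an earlier post):

# Ask Swarm for three copies; the scheduler spreads them across nodes
% docker service create --name web --replicas 3 -p 80:80 web-server

# Show which node each of the three tasks was scheduled on
% docker service ps web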
In my previous entry I took a quick look at some of the Docker orchestration tools. I spent a bit of time poking at docker-compose and mentioned Swarm.
In this entry I’m going to poke a little at Swarm; after all, it now comes as part of the platform and is a key foundation of Docker Enterprise Edition.
Docker Swarm tries to take some of the concepts of the single-host model and extend them to a cluster.
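A minimal sketch of what that looks like in practice (the IP address is invented for the example, and the join token is the one swarm init prints out):

# On the first machine; this node becomes a manager
% docker swarm init --advertise-addr 192.168.1.10

# On each additional machine, using the token printed by init
% docker swarm join --token <worker-token> 192.168.1.10:2377

# Back on the manager: the familiar docker commands now
# operate across the whole cluster
% docker node ls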
In earlier posts I looked at what a Docker image looks like and dug into how it looks at runtime. In this entry I’m going to look at ways of running containers beyond a simple docker run command.
docker-compose

This is an additional program to be installed, but it’s very common in use. Basically, it takes a YAML configuration file, which can describe networks, dependencies, scaling factors, volumes and so on.
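As a flavour of what such a file can look like, here’s a minimal sketch; the service layout and the MySQL password are invented for the example, and web-server is the image built in the earlier post:

% cat docker-compose.yml
version: "3"
services:
  web:
    image: web-server
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:

A single docker-compose up -d then brings up both containers, with the dependency ordering handled for you.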
In the previous entry we looked at how a Docker container image is built.
In this entry we’re going to take a quick look at how a container runs.
Let’s take another look at the container we built last time, running apache:
% cat Dockerfile
FROM centos
RUN yum -y update
RUN yum -y install httpd
CMD ["/usr/sbin/httpd","-DFOREGROUND"]
% docker build -t web-server .
% docker run --rm -d -p 80:80 -v $PWD/web_base:/var/www/html \
    -v /tmp/weblogs:/var/log/httpd web-server
63250d9d48bb784ac59b39d5c0254337384ee67026f27b144e2717ae0fe3b57b
% docker ps
CONTAINER ID  IMAGE       COMMAND               CREATED  STATUS  PORTS  NAMES
63250d9d48bb  web-server  "/usr/sbin/httpd -…
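From here we can start poking at the runtime state; for instance (using the container ID from the docker ps output above):

# Show the processes running inside the container, from the host's view
% docker top 63250d9d48bb

# Dump the low-level runtime details: mounts, network settings and so on
% docker inspect 63250d9d48bb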