In previous posts I pointed out why TLS is important, how to configure Apache to score an A+, and how to tune HTTP headers. All of this depends on getting an SSL cert.
Some jargon explained
Before we delve into a “how to”, some basic jargon should be explained:
SSL/TLS
TLS (“Transport Layer Security”) is the successor to SSL (“Secure Socket Layer”). SSL was created by Netscape in the mid-90s (I remember installing “Netscape Commerce Server” in 1996).
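As a concrete illustration (a minimal sketch, not lifted from the post itself; example.org and the file names are placeholders), generating a private key and a certificate signing request to hand to a certificate authority might look like this:

    # Generate a 2048-bit RSA key and a CSR to submit to the certificate authority
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout example.org.key -out example.org.csr \
        -subj "/CN=example.org"

    # Inspect the CSR before sending it off
    openssl req -in example.org.csr -noout -text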
A few months back I was invited to an RFG Exchange Rounds taping, on containers. There were a number of big name vendors there. I got invited as an end user with opinions :-)
The published segment is on YouTube under the RFG Exchange channel.
Unknown to me, Mark Shuttleworth (Canonical, Ubuntu) was a “headline act” at this taping and I got to hear some of what he had to say, in particular around the Ubuntu “LXD” implementation of containers.
A couple of weeks back I got a new case for my PC. Previously I was using a generic mini-tower and then had an external 8-disk tower (Sans Digital TR8MB) connected via an eSATA concentrator (4 disks per channel).
It’s been working OK for years, but every so often the controller would reset (especially under write loads); no data lost, but annoying. Also, after a power reset (eg a failure, or maintenance) frequently one or two disks (slot 2 in both halves!
Containers aren’t secure… but neither are VMs
An argument I sometimes hear is that large companies (especially financial companies) can’t deploy containers to production because they’re too risky. The people making this argument focus on the fact that the Linux kernel provides only software segregation of resources. They compare this to virtual machines, where the CPU can enforce segregation (eg with VT-x).
I’m not convinced they’re right. It sounds very very similar to the arguments about VMs a decade ago.
My home server
I was doing an upgrade on my home “server” today, and it made me realise that design choices I’d made 10 years ago still impact how I build this machine today.
In 2005 I got 3×300GB Maxtor drives. I ran them in a RAID 5; that gave me 600GB of usable space. It worked well.
In 2007 I upgraded the machine to 500GB disks. This required additional SATA controllers, so I got enough to allow new and old disks to be plugged in at the same time (cables galore).
In previous posts, and even at Cloud Expo, I’ve been pushing the idea that it’s the process that matters, not the technology, when it comes to container security. I’ve tried not to make claims that are tied to a specific solution, although I have made a few posts using Docker as a basis.
I was recently asked my thoughts on Docker in Production: A History of Failure.
Basically they can be boiled down to “it’s new; if you ride the bleeding edge you might get cut”.
In previous articles I’ve explained how to use traditional SSH keys and why connecting to the wrong server could expose your password. I was reminded of a newer form of authentication supported by OpenSSH: CA keys.
The CA key model is closer to how SSL certs work; you have an authority that is trusted by the servers and clients, and a set of signed keys.
Creating the CA key
Creating a certificate authority key is pretty much the same as creating any other key.
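As a rough sketch of that flow (the file names, identity and principal here are illustrative, not taken from the post), the CA keypair is created with ssh-keygen and then used to sign a user’s existing public key:

    # Create the CA keypair; guard the private half like any root credential
    ssh-keygen -f ssh_ca -C "SSH user CA"

    # Sign an existing user public key; -I sets the certificate identity,
    # -n lists the principals (login names) the certificate is valid for
    ssh-keygen -s ssh_ca -I "user@example" -n user ~/.ssh/id_ed25519.pub

    # On each server, point sshd at the CA public key in /etc/ssh/sshd_config:
    #   TrustedUserCAKeys /etc/ssh/ssh_ca.pub

The servers then trust any key signed by the CA, rather than needing every individual public key distributed to them.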
Modern web browsers have a number of settings to help protect your site from attack. You turn them on by use of various header lines in your HTTP responses. Now when I first read about them I thought they were not useful; a basic rule for client-server computing is “don’t trust the client”. An attacker can bypass any rules you try to enforce client side.
But then I read what they do and realised that they are primarily there to help protect the client and, as a consequence, protect your site from being hijacked.
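For example (a hedged sketch, not necessarily the exact set of headers the post settles on), with Apache’s mod_headers enabled the relevant response headers can be added like this:

    # Requires mod_headers (a2enmod headers)
    Header always set X-Frame-Options "SAMEORIGIN"
    Header always set X-Content-Type-Options "nosniff"
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"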
(Side note: in this post I’m going to use TLS and SSL interchangeably. To all intents and purposes you can think of TLS as the successor to SSL; most libraries do both).
You can think of security as a stack. Each layer of the stack needs to be secure in order for the whole thing to be secure. Or, alternatively, you can think of it as a chain; the whole thing is only as strong as the weakest link.
In my previous post I wrote about some automation of static and dynamic scanning as part of the software delivery pipeline.
However nothing stays the same; we find new vulnerabilities, or configurations are broken, or stuff previously considered secure is now weak (64-bit ciphers can be broken, for example).
So as well as doing your scans during the development cycle, we also need to do repeated scans of deployed infrastructure; especially if it’s externally facing (but internal-facing services may still be at risk from the tens of thousands of desktops in your organisation).
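One low-tech way of doing that (a sketch only; the scanner, schedule, hostname and mail address are my assumptions, not something from this post) is a cron entry that re-checks the TLS configuration of an externally facing service every week:

    # Every Sunday at 02:00, enumerate the TLS ciphers offered by www.example.com
    # and mail the result to the security team
    0 2 * * 0  nmap --script ssl-enum-ciphers -p 443 www.example.com | mail -s "weekly TLS scan" secops@example.com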