I got asked another question. I’m going to paraphrase the question for this blog entry.
Given the Russian invasion of Ukraine and the response of other nations (sanctions, asset confiscation, withdrawal of services, isolation of the Russian banking system…) there is a chance of enhanced cyber attacks against Western banking infrastructure in retaliation. How can we be 100% sure our cloud environments are secure from this?
Firstly, I want to dispel the “100%” myth.
I got asked a question… this gives me a chance to write an opinion. I have lots of them!
Is it reasonable to just stick with a single cloud provider, or is it better to go multi-cloud? I think sticking with one seems reasonable. I expect very few places are true multi-cloud, as in a given app runs in two clouds. That becomes challenging if you're trying to use cloud-native services, 'cos how you access RDS would be different to how you access Azure SQL, so writing a true multi-cloud application isn't so simple.
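To make that concrete, here's a sketch of the same "read a customer row" logic against both clouds. This is purely illustrative; the hostnames, credentials, and the choice of pymysql/pyodbc drivers are my assumptions, not anything from a real deployment:

```python
# Illustration only: the same query needs two drivers, two connection
# styles, and even two SQL dialects. All names here are made up.
import pymysql   # a common driver for RDS MySQL
import pyodbc    # a common driver for Azure SQL (SQL Server dialect)

# AWS: RDS MySQL
rds = pymysql.connect(host="mydb.abc123.eu-west-1.rds.amazonaws.com",
                      user="app", password="secret", database="customers")

# Azure: Azure SQL, via an ODBC connection string
azure = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=mydb.database.windows.net;DATABASE=customers;"
    "UID=app;PWD=secret"
)

# Even the "same" query forks: MySQL uses LIMIT, SQL Server uses TOP.
rds.cursor().execute("SELECT name FROM customer LIMIT 1")
azure.cursor().execute("SELECT TOP 1 name FROM customer")
```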
Here’s a common requirement:
We want to transfer a file containing sensitive data to a partner; they want us to put the data in their S3 bucket. How can we do this securely?
Now you might start with putting controls around the S3 bucket itself; make sure it's properly locked down, audit logs and so on. But there are a number of issues with this. In particular, S3 bucket permissions are easy to get wrong.
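For illustration, here's one such bucket-level control: turning on S3's Block Public Access settings. A minimal sketch, assuming boto3 and a made-up bucket name; the point of the post stands, in that this is just one of several settings that all have to be right:

```python
# A minimal sketch of one bucket-level control: S3 Block Public Access.
# (boto3; the bucket name is a placeholder.)
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="partner-inbound-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,       # reject new public ACLs
        "IgnorePublicAcls": True,      # ignore any existing public ACLs
        "BlockPublicPolicy": True,     # reject public bucket policies
        "RestrictPublicBuckets": True, # restrict access if one slips through
    },
)
```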
I was asked a question around the Capital One breach. It seems that, in some quarters, fingers are being pointed at Amazon, with claims that they should be held (at least partly) to blame for it.
It also seems that Senator Wyden is asking Amazon questions around this.
There’s also a question around Paige Thompson, the hacker, and her previous relationship as an Amazon employee. If she used any insider knowledge to break into Capital One then this would erode a lot of trust in Amazon Web Services, and in the public cloud in general.
I’ve spent the past far-too-many years working in the finance industry, in mega-banks and card processors. These companies are traditionally very worried about information security. That’s not to say they always do it well (everyone makes mistakes), but it leads to a conservative attitude.
These types of companies end up creating a massive set of standards and procedures to protect themselves. “Thou MUST do this. Thou MUST do that.”
Even after all this time I hear statements like “Oh, we can just run our code in the cloud”. This is the core of the lift and shift school of cloud usage.
And these people are perfectly correct; they can just run their stuff in the cloud. But it won’t work so well.
I’ve previously written about lift and shift issues, but here I want to focus on the “resiliency” issue.
Unless you’ve been living under a rock, you’ve probably heard of two panic panic panic bugs, known as Meltdown and Spectre. People are panicking about them because they are CPU-level issues that may impact almost every modern CPU around. Meltdown is Intel specific, but Spectre affects Intel, AMD, and potentially others (Red Hat claims POWER and zSeries are impacted).
What is the problem? In short, modern CPUs may execute instructions out of order, especially when the order doesn’t matter. They also execute speculatively: the CPU guesses which way a branch will go and runs ahead of the check. If the guess was wrong the results are discarded, but side effects such as cache loads remain, and those can be measured to leak data.
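To show the shape of the problem, here's the classic Spectre "variant 1" (bounds check bypass) pattern. This is illustrative only: the speculation happens in hardware on compiled code, so Python itself won't exhibit it, but the vulnerable code shape looks like this:

```python
# Illustrative sketch of the Spectre v1 gadget shape (not a working
# exploit; Python doesn't run speculatively like compiled code does).
array1 = [1, 2, 3, 4]
array2 = bytearray(256 * 4096)  # probe array: one cache line per byte value

def victim(x: int) -> None:
    if x < len(array1):                  # bounds check
        # On a real CPU this body may run *speculatively* before the
        # bounds check resolves. With an out-of-range x, the out-of-bounds
        # byte array1[x] is briefly read, and which line of array2 ends
        # up in cache leaks its value via timing measurements.
        _ = array2[array1[x] * 4096]
```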
It’s a fairly common design in enterprise networks: a three-tier network architecture, with firewalls between the tiers.
Typically these layers are split up with variations of the following names:
- Presentation Layer (Web)
- Application Layer (App)
- Data (or storage) Layer (Data)

Typically you may have additional tooling in front of each layer; e.g. a load balancer, a web application firewall, data loss protection tools, intrusion detection tools, database activity monitoring…
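In a cloud environment those firewalls between tiers often become security groups. A minimal sketch of that segmentation, assuming AWS and boto3; the VPC ID, ports, and group names are placeholders:

```python
# A minimal sketch of three-tier segmentation with AWS security groups.
import boto3

ec2 = boto3.client("ec2")
vpc = "vpc-0123456789abcdef0"  # hypothetical VPC

web = ec2.create_security_group(GroupName="web", Description="web tier", VpcId=vpc)["GroupId"]
app = ec2.create_security_group(GroupName="app", Description="app tier", VpcId=vpc)["GroupId"]
data = ec2.create_security_group(GroupName="data", Description="data tier", VpcId=vpc)["GroupId"]

# Internet -> web tier on 443 only
ec2.authorize_security_group_ingress(GroupId=web, IpPermissions=[{
    "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])

# web -> app on 8080; app -> data on 3306; nothing else crosses tiers
ec2.authorize_security_group_ingress(GroupId=app, IpPermissions=[{
    "IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
    "UserIdGroupPairs": [{"GroupId": web}]}])
ec2.authorize_security_group_ingress(GroupId=data, IpPermissions=[{
    "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
    "UserIdGroupPairs": [{"GroupId": app}]}])
```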
“Those who cannot remember the past are condemned to repeat it.” – George Santayana
Default broken

I was reminded, last week, of how old issues repeat.
Back in the 90s it was a truism that if you put an “Out Of The Box” RedHat 4 (not RedHat Enterprise; the original freeware version) server on the internet then it would be compromised within hours. And so we learned; our default builds didn’t have telnet, didn’t have every possible service installed, didn’t have vulnerable configurations.
One of the golden rules of IT security is that you need to maintain an accurate inventory of your assets. After all, if you don’t know what you have then how can you secure it? This may cover a list of physical devices (servers, routers, firewalls), virtual machines, software… An “asset” is an extremely flexible term and you need to look at it from various viewpoints to ensure you have good knowledge of your environment.
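As a sketch of just one of those viewpoints, here's a boto3 loop enumerating EC2 instances; the tag conventions are my assumption, and a real inventory would pull from many more sources than this:

```python
# A minimal sketch of one inventory viewpoint: listing EC2 instances.
import boto3

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            print(inst["InstanceId"], inst["State"]["Name"],
                  tags.get("Name", "<untagged>"))
```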
A while ago I wrote about some of the technology basics that can be used for data persistency. Apparently this is becoming a big issue, so I’m revisiting this from another direction.
Why does this matter? In essence, an application is a method of changing data from one state to another; “I charge $100 to my credit card” fires off a number of applications that result in my account being debited, and the merchant being credited.
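A minimal sketch of that debit/credit state change as a single transaction, using sqlite3 from the standard library; the table layout and amounts are made up for illustration:

```python
# Both updates commit together, or neither does: the state change is atomic.
import sqlite3

db = sqlite3.connect("ledger.db")
db.execute("CREATE TABLE IF NOT EXISTS account (name TEXT PRIMARY KEY, balance INTEGER)")
db.execute("INSERT OR IGNORE INTO account VALUES ('me', 500), ('merchant', 0)")

with db:  # the connection acts as a transaction context manager
    db.execute("UPDATE account SET balance = balance - 100 WHERE name = 'me'")
    db.execute("UPDATE account SET balance = balance + 100 WHERE name = 'merchant'")
```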
Part of any good backup strategy is to ensure a copy of your backup is stored in a secondary location, so that if there is a major outage (datacenter failure, office burns down, whatever) there is a copy of your data stored elsewhere. After all, what use is a backup if it gets destroyed at the same time as the original?
A large enterprise may do cross-datacenter backups, or stream them to a “bunker”; smaller businesses may physically transfer media to a storage location (in my first job, mumble years ago, the finance director would take the weekly full-backup tapes to her house, so we had at most 1 week of data loss).
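In the cloud the same idea might look like pushing the nightly backup to a bucket in a different region. A minimal sketch, assuming boto3; the bucket name, region, and backup path are placeholders:

```python
# A minimal sketch of an offsite copy: upload the backup to a bucket
# in a region other than the primary one.
import boto3

s3_offsite = boto3.client("s3", region_name="eu-west-1")  # not the primary region
s3_offsite.upload_file(
    Filename="/backups/nightly.tar.gz",
    Bucket="acme-backups-offsite",
    Key="nightly/nightly.tar.gz",
)
```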
A typical cloud engagement has a dual responsibility model. There’s stuff that can be considered “below the line” and is the responsibility of the cloud service provider (CSP) and there’s stuff above the line, which is the responsibility of the customer.
Amazon have a good example for their IaaS: the AWS Shared Responsibility Model diagram.
Where the line lives will depend on the type of engagement; the higher up the abstraction tree (IaaS->PaaS->SaaS) the more the CSP has responsibility.
In a previous blog entry I described some of the controls that are needed if you want to use a container as a VM. Essentially, if you want to use it as a VM then you must treat it as a VM.
This means that all your containers should have the same baseline as your VM OS, the same configuration, the same security policies.
Fortunately we can take a VM and convert it into a container.
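One route, sketched below: tar up the VM's root filesystem and feed it to `docker import`, which builds an image from a filesystem tarball. The paths and image name are my assumptions; in practice you'd also strip out kernel packages and the like first:

```python
# A minimal sketch: turn a mounted VM root filesystem into a container image.
import subprocess

# Assumes the VM's root filesystem is mounted at /mnt/vm-root
subprocess.run(["tar", "-C", "/mnt/vm-root", "-cf", "/tmp/vm-root.tar", "."], check=True)
subprocess.run(["docker", "import", "/tmp/vm-root.tar", "myorg/vm-as-container:latest"], check=True)
```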
In a lot of this blog I have been pushing for the use of containers as an “application execution environment”. You only put the minimum necessary stuff inside the container, treat it as an immutable image, never log in to it… the sort of thing that’s perfect for 12-factor applications.
However there are other ways of using containers. The other main approach is to treat a container as a lightweight VM.
A phrase you might hear around cloud computing is lift and shift. In this model you effectively take your existing application and move it, wholesale, into a cloud environment such as Amazon EC2. There’s no re-architecting of the application; there’s no application redesign. This makes it very quick and very easy to move into the cloud. It’s not much different to a previous p2v (physical to virtual) activity that companies performed when migrating to virtual servers (e.g. VMware ESX).
In this glorious new world I’ve been writing about, applications are non-persistent. They spin up and are destroyed at will. They have no state in them. They can be rebuilt, scaled out, migrated, or replaced, and the application as a whole shouldn’t notice… if written properly!
But applications are pointless if they don’t have data to work on. In traditional compute an app is associated with a machine (or set of machines). These machines have filesystems.
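The non-persistent answer is to keep state out of the instance entirely. A minimal sketch, assuming session data lives in an external Redis store (the hostname and key scheme are made up), so any replacement instance sees the same state:

```python
# A minimal sketch of externalised state: sessions live in Redis,
# not on the instance's filesystem.
import redis

store = redis.Redis(host="sessions.internal", port=6379)

def record_login(session_id: str, user: str) -> None:
    store.hset(f"session:{session_id}", mapping={"user": user})
    store.expire(f"session:{session_id}", 3600)  # sessions live an hour
```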
The core problem with a public cloud is “untrusted infrastructure”. We could get a VM from Amazon; that’s easy. What now? The hypervisor isn’t trusted (non-company staff access it and could use this to bypass OS controls). The storage isn’t trusted (non-company staff could access it). The network isn’t trusted (non-company…).
So could we store Personal Identifying Information in the cloud? Could a bank store your account data in a public cloud?
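One standard part of the answer: encrypt client-side, so the provider only ever holds ciphertext. A minimal sketch using the `cryptography` package (my choice, not from the original posts); key management, the genuinely hard part, is elided here:

```python
# A minimal sketch of client-side encryption before data leaves for
# untrusted infrastructure. In reality the key would live in your own
# HSM/KMS, never alongside the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # placeholder for a properly managed key
f = Fernet(key)

ciphertext = f.encrypt(b"account=12345678;balance=100.00")
# ...store 'ciphertext' in the cloud; the provider never sees the
# plaintext or the key...
plaintext = f.decrypt(ciphertext)
```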