It’s a fairly common design in enterprise networks: a three-tier architecture, with firewalls between the tiers.
These tiers typically go by variations of the following names:
- Presentation Layer (Web)
- Application Layer (App)
- Data (or storage) Layer (Data)
There may also be additional tooling in front of each layer, e.g. a load balancer, a web application firewall, data loss prevention tools, intrusion detection tools, database activity monitoring…
This gives a relatively good set of protections: attackers on the internet can only see the servers in the DMZ, and if one of those servers is compromised the intruder still cannot see the data layer; they can only see the application servers.
It’s not perfect, of course. Many organisations don’t have much protection between servers inside each tier, so an attacker who breaks into one web server may be able to attack a second web server and use it to pivot through the organisation. Micro-segmentation is a concept that could protect against this, but it’s very difficult to retro-fit into an existing environment.
Modern application design
Unfortunately this type of environment doesn’t necessarily work so well with modern dynamic applications. If you deploy something like Cloud Foundry then your presentation layer and your application layer may both be on the same network segment, or even running on the same server. With Docker Compose or Swarm you can build a 3-tier design on a single machine, and you can have many 3-tier apps all segregated from each other on the one box. Dynamic scaling can mean that firewall rules have to be updated programmatically. Network Address Translation (NAT) can make it difficult to write firewall rules at all, since the firewall can no longer distinguish between the applications.
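As a sketch of that single-machine 3-tier design, here is a minimal, hypothetical Docker Compose file where networks stand in for the firewalled tiers (service names and images are illustrative only):

```yaml
# Hypothetical 3-tier app on one host; the two networks act as the tier boundaries.
services:
  web:
    image: nginx:alpine
    networks: [frontend]           # presentation tier
  app:
    image: example/app-server      # placeholder image
    networks: [frontend, backend]  # the only service bridging both tiers
  db:
    image: postgres:16
    networks: [backend]            # data tier; not reachable from "web"
networks:
  frontend:
  backend:
    internal: true                 # no route out to the internet
```

The `web` container has no route to `db`; the segmentation that firewalls provided between physical tiers is expressed here purely in the network definitions.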
The new 3-tier network
The idea of the 3-tier network is a good one, and we should try to keep it. But the separation of network control devices (e.g. firewalls) and application servers causes issues. Fortunately technology has moved on, and we can start to take advantage of this.
Tier 1
Modern load balancers can do a lot more than just direct traffic. They can also inspect it, acting as a WAF. They can validate API traffic against XML or JSON rules. They can act as authenticating endpoints, e.g. talking back to an LDAP server…
In effect, a modern appliance like this can be the presentation layer. Start to move traditional web servers out of the DMZ and replace them with these appliances. This is a good thing to do anyway, because it reduces the footprint of devices exposed to the internet. Why run hundreds of full Red Hat instances and JBoss servers, with the associated management requirements, when a handful of dedicated appliances can do the job?
Once we accept the appliance can do the presentation layer role then we’ve also removed one of the problems holding up the deployment of modern applications.
Tier 2 and 3
This is where things get a little more challenging, but where the concept of micro-segmentation comes in.
Amazon Security Groups, Docker Swarm networks, Kubernetes network policies, Cloud Foundry Application Security Groups… all of these technologies can control the egress of traffic from a container or server, and in some cases the ingress rules as well. These rules let you define tightly controlled access paths; instances of application A can only talk to the services and data stores that have been authorised. And since the rules are attached to the application rather than the network, it doesn’t matter where the instances are running; the rules follow them.
This is new and scary for traditional network architects, who like to know where their traffic flows are. Paradoxically, this lack of knowledge is what increases security: we don’t need to care about IP addresses any more; instead we define rules in terms of intent:
- “I want application A web server to talk to API servers from application B and C”
- “I want application B API servers to talk to application B data servers”
- “I want application C API servers to talk to application C data servers”
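In Kubernetes, the second of those intents could be written as a NetworkPolicy. This is a hypothetical sketch: the namespace, labels and port are all illustrative, not taken from any real deployment:

```yaml
# Only application B's API pods may reach application B's data pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-b-data-ingress
  namespace: app-b          # illustrative namespace
spec:
  podSelector:
    matchLabels:
      tier: data            # the policy applies to B's data pods
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: api     # only B's API pods are allowed in
      ports:
        - protocol: TCP
          port: 5432        # e.g. PostgreSQL
```

Note that nothing here mentions an IP address; the rule is pure intent, expressed in labels, and follows the pods wherever the scheduler places them.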
Summary
This sort of approach is still in its infancy in many places. It’s very hard to retro-fit this model onto existing networks; it requires a massive discovery process, especially since the majority of communication may be within a single tier, where no existing firewall rules may be present.
In new deployments, however, it can be made a requirement. Ensure your Amazon instances are behind locked-down security groups that restrict egress as well as ingress. Monitor deployments and alert on overly open access (does your database server really need the ability to reach any server on the internet?). Set up VPC Flow Logs if you need traffic logging.
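A locked-down security group of that sort might look like the following CloudFormation fragment. This is an assumed, illustrative example: the VPC ID, security group ID, CIDR and port are placeholders:

```yaml
# A data-tier security group: ingress from the app tier only,
# and egress restricted instead of the default allow-all.
Resources:
  DataTierSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Data tier - app-tier ingress only, no internet egress
      VpcId: vpc-0123example                         # placeholder VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 5432
          ToPort: 5432
          SourceSecurityGroupId: sg-0456appexample   # placeholder app-tier group
      SecurityGroupEgress:
        - IpProtocol: tcp                            # declaring any egress rule
          FromPort: 5432                             # replaces the implicit
          ToPort: 5432                               # allow-all default
          CidrIp: 10.0.0.0/16                        # internal network only
```

With egress pinned to the internal range, a compromised database server can’t phone home to the internet, which is exactly the property to alert on if it’s ever missing.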
In a container world, use orchestration tools that can define communication between services.
We’re now moving from a “discovered” communication model to a “defined by intent” model.
The 3-tier network isn’t dead, after all… it just changed to a multi-tier micro-segmented network!