A major problem in many environments is a lack of real network control inside the perimeter. They may have strong border controls (multi-tier DMZs, proxy gateways, no routing between tiers), but once inside, traffic is unconstrained. This is sometimes jokingly referred to as a “hard shell, soft center” network design.
If you’re lucky, your prod/dev/qa environments are segmented. More likely there’s no restriction at all, and a dev program may accidentally talk to a prod database. Oops!
And, of course, this “open network” gives an attacker who manages to gain entry the ability to pivot and attack other, unrelated servers.
The question to ask is whether we can control things at a deeper layer. Can we determine what applications need to talk to each other? In a microservices model, can we control which service each application component needs to talk to?
In our existing environments this is hard to do; things have grown haphazardly, and we have no real visibility into where our “command and control” (C2) servers are: the vuln scanning tool, the backup servers, the CMDB collection tool, the automation tool. These servers may not have been placed in any consistent location, and mapping out the connections at scale is not easy.
At best we might be able to restrict “consumer banking” servers from interacting with “investment banking” servers; restrict the ATM network from seeing the HR servers… a form of “macro segmentation”.
In the new “cloudy” world, however, we get a chance to revisit this. I’ve previously written about automation of application instantiation; hands-off deployment of servers, containers, applications. We can take this further and do hands-off deployment of networking as well. “Software defined networking” allows us to build and modify network constructs in a programmatic, automated manner. We can build logical constructs for different applications and services, and define rules for communication between these constructs. We can associate these rules with the app deployment so that routing, packet filters (e.g. iptables) and the like are updated as part of application orchestration.
Egress rules can also be applied; your app configuration declares what services it needs to talk to. Your orchestration layer can build out egress iptables rules outside the container (nothing that happens inside the container can change them), and a set of logging rules can be set up so that bad traffic is trapped and reported on.
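As a concrete sketch, the orchestration layer might render an app’s declared dependencies into host-side rules like these. The interface name, addresses, ports and log prefix are all invented for illustration, not taken from any particular orchestrator:

```shell
#!/bin/sh
# Sketch: egress rules an orchestration layer might install on the host
# for a container whose manifest declares it only needs the corporate DNS
# server and a database on 10.0.5.10:5432. All names/addresses are
# illustrative. Must run as root on the host, outside the container.

VETH=veth-app1   # assumed host-side interface of the container

# Allow only the declared dependencies out of the container.
iptables -A FORWARD -i "$VETH" -p udp --dport 53   -d 10.0.0.2  -j ACCEPT
iptables -A FORWARD -i "$VETH" -p tcp --dport 5432 -d 10.0.5.10 -j ACCEPT

# Log and drop everything else; the LOG lines feed the SIEM.
iptables -A FORWARD -i "$VETH" -j LOG --log-prefix "egress-deny: " --log-level 4
iptables -A FORWARD -i "$VETH" -j DROP
```

Because these rules live in the host’s netfilter tables, a compromised process inside the container has no way to loosen them.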
These logs can be sent to your SIEM system and monitored the same as any other security log. If you now see an application container start to make a lot of port 22 connection attempts, then you know something has gone wrong and you can take action (e.g. shut down the container, track back where the attack came from, analyse the attack vector and fix the code). Of course the rules will have prevented pivoting, but now you also get a deep level of visibility into what is happening inside your network, not just at the border.
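On the monitoring side, even a small script over the firewall logs can catch the port 22 example. This is a minimal sketch assuming kernel LOG lines with an invented “egress-deny” prefix; a real SIEM rule would perform the same aggregation:

```shell
#!/bin/sh
# Sketch: flag a source that suddenly makes many SSH (port 22) attempts,
# based on kernel LOG lines of the sort an egress deny rule would emit.
# The "egress-deny" prefix and field layout are assumptions for illustration.

THRESHOLD=3

# Read firewall log lines on stdin; print an ALERT per noisy source.
detect_ssh_scan() {
  awk -v threshold="$THRESHOLD" '
    /egress-deny/ {
      src = ""; hit = 0
      for (i = 1; i <= NF; i++) {
        if ($i ~ /^SRC=/) src = substr($i, 5)   # strip the "SRC=" tag
        if ($i == "DPT=22") hit = 1             # destination port 22
      }
      if (src != "" && hit) cnt[src]++
    }
    END {
      for (s in cnt)
        if (cnt[s] >= threshold)
          print "ALERT: " s " made " cnt[s] " port-22 attempts"
    }'
}

# Demo with synthetic lines (in production, feed the SIEM export instead).
printf '%s\n' \
  'kernel: egress-deny: IN=veth1 SRC=10.0.3.7 DST=10.0.9.1 PROTO=TCP DPT=22' \
  'kernel: egress-deny: IN=veth1 SRC=10.0.3.7 DST=10.0.9.2 PROTO=TCP DPT=22' \
  'kernel: egress-deny: IN=veth1 SRC=10.0.3.7 DST=10.0.9.3 PROTO=TCP DPT=22' \
  'kernel: egress-deny: IN=veth1 SRC=10.0.4.4 DST=10.0.9.1 PROTO=TCP DPT=443' \
  | detect_ssh_scan
# → ALERT: 10.0.3.7 made 3 port-22 attempts
```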
Summary
Tools are still developing in this space; there are networking vendors that can handle routing via BGP configurations (your app joins a BGP group, and centralised rules determine what services are visible), combined with kernel iptables/ipset rules configured via the BGP rules. Some of these are drop-in replacements in an OpenStack setup and extend the “security group” construct.
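A common building block in such schemes is ipset: a named set of addresses that a single iptables rule matches against, so membership can be updated centrally without rewriting the firewall. A hand-rolled sketch (set name and addresses invented; in the vendor setups above, membership would be pushed out via BGP rather than added manually):

```shell
#!/bin/sh
# Sketch: one iptables rule covering a centrally managed set of database
# servers. "prod-db" and the addresses are illustrative. Requires root.

ipset create prod-db hash:ip
ipset add prod-db 10.1.20.11
ipset add prod-db 10.1.20.12

# One rule covers the whole set; membership changes need no iptables edits.
iptables -A OUTPUT -p tcp --dport 5432 -m set --match-set prod-db dst -j ACCEPT
iptables -A OUTPUT -p tcp --dport 5432 -j REJECT
```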
The Cloud Foundry PaaS also has security groups associated with applications that configure egress rules per application component.
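For example, an application security group that only permits egress to a database subnet looks something like the following (group, org and space names are invented, the subnet is illustrative, and the exact `bind-security-group` arguments vary between cf CLI versions):

```shell
#!/bin/sh
# Sketch of a Cloud Foundry application security group. Requires a
# logged-in cf CLI session with admin rights; names are placeholders.

cat > db-access.json <<'EOF'
[
  {
    "protocol": "tcp",
    "destination": "10.0.11.0/24",
    "ports": "5432",
    "description": "allow apps to reach the database tier"
  }
]
EOF

cf create-security-group db-access db-access.json
cf bind-security-group db-access my-org --space my-space
```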
If you use Amazon, you can configure “virtual private clouds” (VPCs) and segment your networks that way.
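A rough sketch with the AWS CLI: carve a VPC into app and database subnets, then let a security group admit database traffic only from the app tier. The IDs here are placeholders you would substitute from the output of the earlier commands, and the CIDRs are illustrative:

```shell
#!/bin/sh
# Sketch: VPC-level segmentation with the AWS CLI. Requires configured
# AWS credentials; all IDs below are placeholders.

aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-1234567 --cidr-block 10.0.1.0/24  # app tier
aws ec2 create-subnet --vpc-id vpc-1234567 --cidr-block 10.0.2.0/24  # db tier

# Security group for the database tier: only the app subnet may connect.
aws ec2 create-security-group --group-name db-tier \
    --description "database tier" --vpc-id vpc-1234567
aws ec2 authorize-security-group-ingress --group-id sg-89abcde \
    --protocol tcp --port 5432 --cidr 10.0.1.0/24
```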
There are multiple ways of doing network segmentation, but taking advantage of application redesign, and of the new automation that has to be built out to handle a cloudy environment, means we can avoid the mistakes of the past: increase security in a practical and controlled manner, limit exposure when (not if, when) an attacker gets in, and gain greater visibility of the network setup.
Add in something like Google’s BeyondCorp perimeterless network design to handle the physical layer and we’ve gone a long way!