Welcome to part 5 of the Micro-Segmentation Defined NSX Securing “Anywhere” blog series. Previous topics covered in this series include:

- Part I: Micro-segmentation Defined
- Part II: Securing Physical Environments
- Part III: Operationalizing Micro-segmentation
- Part IV: Service Insertion
In this post we describe how NSX micro-segmentation enables fundamental changes to security architectures which, in turn, facilitate the identification of breaches:

- By increasing visibility throughout the SDDC, eliminating blind spots
- By making it feasible and simple to migrate to a whitelisting / least-privilege / zero-trust security model
- By sending rich, contextual events to SIEMs and eliminating false positives
- By providing inherent containment, even for zero-day attacks
Threat analysis is the new trend in the security landscape, and established vendors as well as startups are proposing many tools to complement the current perimeter logging approach. The attraction of these tools is based on the assumption that by correlating flows from different sources within a perimeter, threat contexts will emerge and compromised systems will be uncovered. Currently, these systems go unnoticed for long periods of time because the suspicious traffic moves laterally inside the perimeter and does not traverse a security device: you can’t protect what you don’t “see”.
While these tools are welcome additions to the security toolkit, what they imply is that the current perimeter approach fails to provide the context and visibility needed to identify and contain successful breaches. Are these tools being leveraged to their full potential by having them sort through ever-increasing amounts of data, or could some basic changes to the security architecture provide them with less, but qualified and context-rich, information so they can do their work more efficiently?

Context: why would my HVAC control application want to talk to my PoS units?
Security administrators understand the notion of context. They spend a fair amount of time building lists and grouping systems that have common properties: a database or PCI zone, systems that are public-facing in a DMZ, users from various groups within a company, etc. Unfortunately, they almost exclusively leverage physical segmentation and IP ranges to convey that context, which can only represent one dimension at a time. For example, how do you carve out of your network a VLAN, subnet or network area that would represent:

- all my “Windows IIS servers” plus “the ones used by MS Exchange”
- all my “Windows IIS servers” plus “those used in Horizon View”
- all my “Windows IIS servers” plus “the ones generated dynamically for developers, which need to be isolated from other users and the production network”
A VLAN, subnet or some form of endpoint grouping tied to physical networking constructs cannot adequately represent the rich context security administrators would like to attach to applications and systems. Only a layered software perimeter construct that has no dependencies on physical infrastructure can deliver on the compound nature of the context we need to represent.
Via its Service Composer feature, NSX provides a completely logical mechanism to group and arrange VMs and containers. Membership in a Security Group (“bubble”) can be defined with complex logic leveraging multiple conditional statements (if / then), Boolean logic (and / or / not) and most attributes the VM or container may possess. Policies attached to the Security Groups will generate events based on the rulesets contained in the policies, including the identity of the user as defined in Active Directory.
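To make the idea concrete, here is a minimal sketch of that kind of attribute-based, Boolean group membership. This is purely illustrative Python, not the NSX API or Service Composer syntax; the attribute names (`os`, `services`, `app`, `tags`) are assumptions for the example.

```python
# Illustrative sketch (NOT the NSX API): evaluating Security Group
# membership with Boolean logic over VM attributes, in the spirit of
# Service Composer's dynamic membership criteria.

def is_member(vm: dict) -> bool:
    """Hypothetical rule: Windows IIS servers used by MS Exchange,
    excluding anything tagged as a developer sandbox."""
    return (
        vm.get("os") == "Windows"
        and "IIS" in vm.get("services", [])
        and vm.get("app") == "MS Exchange"
        and not vm.get("tags", {}).get("dev-sandbox", False)
    )

vms = [
    {"name": "iis-01", "os": "Windows", "services": ["IIS"],
     "app": "MS Exchange", "tags": {}},
    {"name": "iis-02", "os": "Windows", "services": ["IIS"],
     "app": "Horizon View", "tags": {}},
]

# Group membership is computed from attributes, not from IP ranges.
exchange_iis_group = [vm["name"] for vm in vms if is_member(vm)]
print(exchange_iis_group)  # ['iis-01']
```

Because membership is evaluated against attributes rather than network location, a VM joins or leaves the group automatically as its properties change, with no re-addressing required.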
The immediate benefit of defining software perimeters in this fashion is that each event sent to your SIEM system is 100% representative of the full context associated with the VM or container. For example, an event could represent a blocked flow from an administrator’s session on one of the IIS servers in MS Exchange located in your DMZ Security Group trying to access an internal system in your PCI Security Group.
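The blocked-flow event described above might look something like the following. The field names and values here are hypothetical, meant only to show how much context a single entry can carry compared to a raw 5-tuple log line:

```python
# Hypothetical sketch of a context-rich event forwarded to a SIEM;
# all field names are illustrative assumptions, not a real NSX schema.
import json

event = {
    "action": "BLOCK",
    "user": "CORP\\jdoe",                       # identity from Active Directory
    "source_vm": "iis-exchange-03",
    "source_groups": ["Windows IIS Servers", "MS Exchange", "DMZ"],
    "dest_vm": "pci-db-01",
    "dest_groups": ["PCI"],
    "protocol": "TCP",
    "dest_port": 1433,
}

print(json.dumps(event, indent=2))
```

Each field maps directly to a Security Group or directory identity, so the SIEM receives the full context in one entry instead of having to correlate it from multiple sources.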
That is a lot of context for a single event entry, one I know most security administrators can only dream of getting with their current security architecture.

Visibility: getting rid of our blind spots
In order to collect more information, threat analysis tools require installing agents on systems, collecting flows from the network switches facing the servers, or deploying specialized probes inside perimeters. If we look at this more closely, we realize that all these approaches require the collection engine to move closer to the source, i.e. they require the creation of micro-segments that are capable not only of segregating the traffic sources, as a PVLAN could do, but also of inspecting that traffic in order to report contextual events.
By instantiating the NSX Distributed Firewall on every VM or container, and optionally allowing partner solutions to attach themselves at the same point, NSX provides the ultimate micro-segmentation solution, delivering full visibility of all the traffic originating from or bound for a particular VM or container.
If you consider that, in the context of “visibility”, endpoint solutions, probes and flow collectors are really logging agents, you can see how deploying the NSX Distributed Firewall in your environment gives you full visibility of all data center traffic by design, offsetting the expense of deploying and operating a parallel environment just to see the traffic you are currently blind to.

Containment: how to keep the bad guys from moving around too easily
Threat analysis tools are there to alert you when something is going wrong. Once they figure out the nature of the attack, some tools will go a step further and update security devices with new rules to contain the threat. But by that time you may have more than one compromised system and possibly already be dealing with data exfiltration.
So it is not sufficient to identify the attack; we also need to contain it as much as possible while the analysis is ongoing. This is where implementing a whitelisting / least-privilege model becomes a critical element of the architecture. With whitelisting / least privilege enabled, only the allowed communications between known systems are permitted, while all other combinations are denied. Lateral movement between unrelated systems is impossible by default, making the progression of an attack throughout the data center much harder to achieve.

However, this requires you to understand how your applications work. If you ask security administrators whether they know all the applications running in their data center, which components of an application should be allowed to talk to other applications, who should be accessing each application, and so on, you will, 9 times out of 10, get an embarrassed “no” for an answer. That is the reality. Applications get deployed and security administrators go ahead and block known “bad” traffic and open up a few ports, th
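The whitelisting / least-privilege model described above can be sketched in a few lines. This is an assumed, simplified rule model, not NSX rule syntax; the group names and ports are invented for the example:

```python
# Minimal sketch (assumed rule model, not NSX syntax) of a whitelisting /
# least-privilege policy: only explicitly allowed flows pass, and every
# other combination, including lateral movement, hits the default deny.

ALLOW_RULES = {
    # (source group, destination group, destination port)
    ("web", "app", 8443),
    ("app", "db", 3306),
}

def is_allowed(src_group: str, dst_group: str, port: int) -> bool:
    """Default deny: a flow passes only if a rule explicitly allows it."""
    return (src_group, dst_group, port) in ALLOW_RULES

print(is_allowed("web", "app", 8443))  # True: explicitly whitelisted
print(is_allowed("web", "db", 3306))   # False: lateral move, default deny
```

The key property is the inversion of the usual model: instead of enumerating known “bad” traffic, anything not explicitly described is denied, which is what contains even a zero-day attack whose signature no one has yet.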