“If you really want to protect your network, you really have to know your network.”
This recommendation is now more important than ever. The complexity and size of computer networks are growing exponentially, with more than a billion new connected devices coming online worldwide every year. On top of the growing reach of networks, cyber threats are becoming more sophisticated. The 2020 SolarWinds supply chain attack showed that a patient adversary can orchestrate a highly elaborate operation to move laterally through the networks of modern organizations.
Network visibility is paramount for gaining situational awareness and reducing the exposure of our critical assets, yet the majority of organizations still lack the visibility needed to defend themselves efficiently. The urgency to improve the situation is heightened in the case of operational technology (OT) networks, where cyber attacks could cause heavy damage to industrial equipment or even loss of life. The latest ICS/OT Cybersecurity Year In Review from Dragos found that 86% of its service engagements involved a lack of visibility across OT networks.
A common challenge when starting a network visibility project is that different groups of stakeholders define it differently. This article aims to clarify the two important building blocks of a comprehensive network visibility program. Both are part of a larger risk assessment strategy for developing cyber resiliency: the ability to keep mission-critical operations running despite being under threat.
Monitoring vs. Modeling
The best practices outlined in NIST Special Publication 800-160 on Developing Cyber-Resilient Systems aim to help organizations anticipate, withstand, recover from, and adapt to adverse conditions. The NIST document references 14 techniques that are the building blocks of cyber resiliency, grouped here into three categories:
- Visibility & Understanding
- Least privilege principle
- Agile recovery capabilities
Network visibility is covered by the following two building blocks under Visibility & Understanding:
- Analytic Monitoring: Monitor and detect adverse actions and conditions in a timely and actionable manner.
- Dynamic Representation: Keep the representation of the network current and enhance the understanding of dependencies.
These two sides of network visibility are both crucial and complementary.
Live Traffic Monitoring with Network Sensors
Analytic monitoring means understanding which assets are connecting to which services right now. It's a reactive technique that relies on network instrumentation such as TAPs or SPAN ports to collect live traffic and dissect protocols through deep packet inspection. It provides visibility into all active endpoints that communicate over network paths on which a sensor has been deployed. It's the go-to approach for threat hunting and intrusion detection.
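To make the idea concrete, here is a minimal sketch of the analytic side: aggregating observed flows into a "who talks to which service" view and flagging sources outside an expected baseline. The flow records, addresses, and baseline below are hypothetical sample data, not output from any real sensor.

```python
from collections import defaultdict

# Hypothetical flow records, as a sensor might emit after protocol dissection.
# Fields: (src_ip, dst_ip, dst_port, protocol)
FLOWS = [
    ("10.0.1.5", "10.0.2.10", 502, "modbus"),
    ("10.0.1.5", "10.0.2.10", 502, "modbus"),
    ("10.0.1.7", "10.0.2.10", 22, "ssh"),
    ("10.0.3.9", "10.0.2.10", 3389, "rdp"),
]

def summarize(flows):
    """Aggregate which sources talk to which service -- the 'right now' view."""
    talkers = defaultdict(set)
    for src, dst, port, proto in flows:
        talkers[(dst, port, proto)].add(src)
    return talkers

def alert_unexpected(talkers, baseline):
    """Flag services contacted by sources outside their allowed baseline."""
    alerts = []
    for (dst, port, proto), sources in talkers.items():
        unexpected = sources - baseline.get((dst, port, proto), set())
        if unexpected:
            alerts.append((dst, port, proto, sorted(unexpected)))
    return alerts

# Hypothetical baseline of who is expected to reach each service.
BASELINE = {
    ("10.0.2.10", 502, "modbus"): {"10.0.1.5"},
    ("10.0.2.10", 22, "ssh"): {"10.0.1.7"},
}

if __name__ == "__main__":
    for alert in alert_unexpected(summarize(FLOWS), BASELINE):
        print("unexpected access:", alert)
```

The reactive nature of the technique is visible here: the RDP connection from an unknown host is only detected because it already happened on a monitored path.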
Offline Network Modeling with Firewall Configurations
Dynamic representation means understanding which assets can connect to which services. It's a proactive technique that relies on configuration files from firewalls, routers, and layer-3 switches to model the network topology and analyze connectivity paths. It provides accurate visibility of the network architecture and enables risk assessment without deploying any sensors or agents in the environment.
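The core of this technique can be sketched with a toy first-match rule evaluator: given a simplified model of a firewall's access-control list, answer "can this asset reach that service?" from configuration alone, with no packet capture involved. The rules and addresses below are hypothetical and the model ignores many real-world details (zones, NAT, protocol fields).

```python
from ipaddress import ip_address, ip_network

# Hypothetical, simplified firewall policy: (src_net, dst_net, dst_port, action),
# evaluated top-down with first-match semantics, like a typical ACL.
RULES = [
    ("10.1.0.0/24", "10.2.0.0/24", 443, "allow"),
    ("0.0.0.0/0",   "10.2.0.0/24", 502, "deny"),
    ("10.3.0.0/24", "10.2.0.0/24", 502, "allow"),
]

def can_connect(src, dst, port, rules, default_action="deny"):
    """Answer 'which assets CAN connect' from the configuration model."""
    for src_net, dst_net, dst_port, action in rules:
        if (ip_address(src) in ip_network(src_net)
                and ip_address(dst) in ip_network(dst_net)
                and port == dst_port):
            return action == "allow"  # first matching rule decides
    return default_action == "allow"
```

Note that in this toy policy the broad deny on port 502 shadows the later allow for 10.3.0.0/24, so `can_connect("10.3.0.5", "10.2.0.9", 502, RULES)` returns `False`. Surfacing shadowed or overly permissive rules like this is exactly the kind of insight connectivity modeling provides before any traffic flows.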
Complementary Approaches: Where to Start?
Each approach answers a different set of questions. On the one hand, network traffic monitoring is extremely useful for identifying compromised assets and exploited vulnerabilities, and for detecting whether sensitive information is being exfiltrated or a connected service is misconfigured. On the other hand, network access modeling enables the proactive verification of network segmentation and reveals whether critical vulnerabilities are exposed on the network. It also makes it possible to measure risks related to remote access and to simulate possible network attack paths.
There is no doubt that to become cyber resilient, organizations have to invest in both techniques to eliminate blind spots through a comprehensive network visibility program. An important question remains: where to start? The answer depends on available resources and time. Instrumenting a network to collect live traffic data typically requires multiple months of deployment, while collecting configuration files to build a network model usually takes only a few days. Start with modeling to gain a fast and accurate understanding of your network architecture. That knowledge will then help you plan the deployment of sensors to augment your network visibility with live data.
Robin Berthier is Co-Founder and CEO of Network Perception, a startup dedicated to designing and developing highly-usable network audit solutions. Berthier has over 15 years of experience in the design and development of network security technologies. He received his PhD in the field of cybersecurity from the University of Maryland College Park and served the Information Trust Institute (ITI) at the University of Illinois at Urbana-Champaign as a Research Scientist.