The network perimeter is no longer viable.
"It is no longer feasible to simply enforce access controls at the perimeter of the enterprise environment and assume that all subjects (e.g., end users, applications, and other non-human entities that request information from resources) within it can be trusted.”
— Line 259, page 1, NIST SP 1800-35B, National Institute of Standards and Technology (NIST).
Enterprise environments are facing the Perimeter Problem: the traditional perimeter-defense is failing them, and it's getting progressively worse. The Identity Theft Resource Center tracked a record number of data breaches in 2021, and 2022 fell only 60 events short of that record, "due in part to Russia-based cybercriminals distracted by the war in Ukraine and volatility in the cryptocurrency markets."
The perimeter-defense made sense in the past. Enterprises had their own buildings where they could control access, and all sensitive assets and resources lived inside those buildings. Organizations could reasonably ensure nobody unauthorized entered the building and enforce access at the door.
However, times have changed. The rise of cloud computing, mobile devices, and remote work have blurred the edges of the network perimeter.
This post discusses the three main problems associated with perimeter-based security, namely:
Defining the Perimeter
Tunnels in the Defense
Insider Threats
And the proposed solution: going perimeter-less with zero trust architecture.
To understand the Perimeter Problem, we must first understand the perimeter. This refers to the boundary that separates an organization's internal network from external networks, such as the internet. Also known as the network perimeter, it usually contains:
Firewalls
Intrusion detection and prevention systems
Access control mechanisms
Organizations adopted the perimeter-defense when it made sense: everything outside is scary and untrusted, while everything inside is safe and trustworthy. Enforcing access controls correctly at the perimeter ensures that nothing dangerous should ever get inside the network.
But this has three problems:
You can only defend a perimeter you can define
Tunneling past your own defenses
Insider threats — how do you defend against what’s already inside?
You have a castle. At first, the castle has only one gate, called the Firewall, on the southern side. Guards check everyone entering, at all times, and all traffic in and out of the castle must pass through the Firewall gate. But workers begin complaining about the travel distance from the northside fields to the southern gate. To accommodate them, you redefine the northside fields as part of the castle's territory, then grant access by opening a hole in your north wall and fencing in the fields, because fences now count as part of the perimeter. It's easy to build new wooden fences whenever new extensions are required!
In the past, organizations could rely on perimeter-based security solutions to protect their assets. They hosted their own infrastructure and kept all data safely within physical boundaries: monitor physical entry to the building and gate all network connections with a firewall.
Then came the rise of cloud computing, mobile devices, and remote work, all blurring the perimeter's edges.
Ask any network administrator which is easier to protect: a network that's fully contained within the corporate building, or a network that's cloud-hosted, serves multiple locations, and is meant to be accessed from anywhere.
Maybe that's why some companies are leaving the cloud and embracing edge deployments; they're trying to redraw defined perimeters again. There is a strong argument that only organizations still running contained, self-hosted, on-premises infrastructure fully understand where their network perimeter ends and where the dangerous internet begins.
When the perimeter's edges look different every other day, how can you defend them? Provide too broad a defense and you inhibit workflow and productivity; provide too little and you expose your internal network to external access.
But remote work and access is too valuable to simply give up.
To address this, some network infrastructures use VPNs to provide tunneling while simplifying the work of defining the network's boundary. Except these entry points introduce a new problem, that being…
A new method arises: your chief architect proposes that instead of knocking holes in your wall, they build a secure tunnel through the wall that extends to the northside fields. Guards at the tunnel's entrance check farmhands wanting to enter the castle's grounds. But remember: the faraway fields are now considered part of the castle. So long as these farmhands pass the checks, the castle has reason to believe they are safe and trustworthy.
When a VPN connection begins, it creates a secure tunnel between the remote device and the company network. But let’s call this what it is: an entry point.
The perimeter-defense relies on checking authentication and authorization at each entry point. The network assumes that any user inside the perimeter is trusted. All of this works well until you realize your internal network is still vulnerable to whatever comes through these tunnels. Remember what NIST says: the flawed assumption is that what’s on the inside is safe and trustworthy. It isn't.
Sure, one can argue that multiple firewalls, network segmentation, and other techniques can mitigate this risk — but creating and granting these privileged access roles for each use case either scales horribly or becomes a nightmare to manage. At some point, whether for resource or maintenance reasons, the perimeter-defense will end up exposing at least some part of your internal network to malicious actors (hackers or insiders), enabling lateral movement that results in breaches.
There’s a reason why NIST advocates against VPNs:
"Remote enterprise assets should be able to access enterprise resources without needing to traverse enterprise network infrastructure first. For example, a remote subject should not be required to use a link back to the enterprise network (i.e., virtual private network [VPN]) to access services utilized by the enterprise and hosted by a public cloud provider (e.g., email).”
Making it worse, these Layer 4 tunnels provide limited visibility into the Layer 7 traffic passing through them, and cannot provide real-time analytics without considerable tradeoffs. While NextGen VPNs offer some improvements to logging and auditing capabilities, they still rely on the same basic tunneling technology and are therefore still vulnerable to the same issue.
And logging correctly matters, because…
Echoing what NIST says: It is no longer feasible to simply enforce access controls at the perimeter of the enterprise environment and assume that all subjects within it can be trusted.
Malicious or negligent, the problem is the same: what happens when the threat is a user or device you already trust? NextGen or not, VPNs rely on the perimeter-defense, so there will always be a concept of the "trusted inside entity, trusted inside space."
But as supply-chain hacks, socially engineered users, corporate sabotages, and attempts at IP theft increase in frequency, organizations are forced to wrangle with the new truth: you might already be hacked.
Sysadmins and DevOps teams should assume breach. Once one accepts this reality, every firewall, perimeter, and network segmentation they've built is rendered meaningless: those defenses guard against the outside while the threat is already on the inside.
This is why zero trust exists. Instead of enforcing access controls at the network perimeter, each individual resource should be capable of authentication and authorization on its own.
Or as NIST puts it:
“Access controls can be enforced on an individual resource basis, so an attacker who has access to one resource won’t be able to use it as a springboard for reaching other resources.”
There is no perimeter. There is no “trusted inside” and “scary outside” because where the requesting user sits is not a good basis for providing access. Everything and anything that tries to access a resource is inherently untrusted until it proves itself trustworthy via identity, device, and request context.
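As a sketch, a per-resource access decision might combine identity, device posture, and request context, with deny as the default. All names and policy fields below are hypothetical, for illustration only:

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    """Attributes of one access request; field names are illustrative."""
    user_email: str
    device_trusted: bool  # e.g., a verified device certificate
    resource: str

# Hypothetical per-resource policies: who may reach what,
# regardless of network location -- there is no "trusted inside".
POLICIES = {
    "payroll-app": {"allowed_domains": {"example.com"}, "require_trusted_device": True},
    "wiki":        {"allowed_domains": {"example.com"}, "require_trusted_device": False},
}

def is_authorized(ctx: RequestContext) -> bool:
    """Evaluate a request against the resource's own policy."""
    policy = POLICIES.get(ctx.resource)
    if policy is None:
        return False  # default-deny: unknown resources grant nothing
    domain = ctx.user_email.rsplit("@", 1)[-1]
    if domain not in policy["allowed_domains"]:
        return False
    if policy["require_trusted_device"] and not ctx.device_trusted:
        return False
    return True
```

Note that the decision never asks *where* the request came from; a request from inside the office and one from a coffee shop are evaluated identically.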
This security model is the heart of zero trust, which assumes that every user and device accessing the network is a potential threat.
Legacy tools and infrastructure may not have access control capabilities. Moreover, getting every last application to use TLS or other authentication is a non-trivial project.
Luckily, there exists a class of tools that can do this: the reverse-proxy.
Placed in front of each resource, a reverse-proxy can act as that resource's access control gateway. This fulfills NIST's recommendation of enforcing access control on an individual resource basis without needing to purpose-build access controls into each resource.
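A minimal sketch of the idea: a gateway that sits in front of one upstream resource, makes an access decision on every request, and only forwards requests that pass. The upstream address and identity header are hypothetical; a real gateway would validate a signed identity token rather than trust a plain header:

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical address of the resource being protected.
UPSTREAM = "http://localhost:8080"

def check_access(headers) -> bool:
    """Access decision for one request. A real gateway would verify a
    signed identity assertion, not a spoofable plain-text header."""
    user = headers.get("X-Authenticated-User", "")
    return user.endswith("@example.com")

class Gateway(BaseHTTPRequestHandler):
    def do_GET(self):
        if not check_access(self.headers):
            self.send_response(403)  # deny before the resource is ever touched
            self.end_headers()
            self.wfile.write(b"Forbidden")
            return
        # Request passed the policy check: forward it to the resource.
        with urllib.request.urlopen(UPSTREAM + self.path) as resp:
            self.send_response(resp.status)
            self.end_headers()
            self.wfile.write(resp.read())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 4000), Gateway).serve_forever()
```

Because the policy check lives in the proxy, the upstream resource needs no changes at all; every resource gets its own enforcement point simply by being fronted with one.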
We understand that such a fundamental shift could never happen overnight, so it should be a gradual roll-out across an enterprise environment.
If you just want to start implementing access control to sensitive resources today, Pomerium is an open-source context-aware access gateway to secure access to applications and services following NIST’s best practices as outlined above.
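As a rough illustration of what this looks like in practice, a Pomerium route maps an external hostname to an internal service and attaches a per-route authorization policy. The hostnames and domain below are placeholders, and the exact fields may differ by version — consult Pomerium's configuration reference for the authoritative schema:

```yaml
# Hypothetical route: requests to internal.example.com are authenticated
# and authorized by Pomerium before being proxied to the upstream service.
routes:
  - from: https://internal.example.com
    to: http://localhost:8080
    policy:
      - allow:
          or:
            - domain:
                is: example.com
```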
Our users depend on Pomerium every day to secure clientless, zero trust access to their web applications.
You can check out our open-source Github Repository or give Pomerium a try today!