The Architecture of Coexistence
[Header image copyright: Sanjay Basu]
Sovereignty in Shared Space
How MPLS VRFs Taught Me Everything I Know About Digital Borders, Identity, and the Paradox of Isolation
The year was 2008. The word “cloud” still meant something meteorological to most enterprise executives. Yet in the bowels of Perot Systems’ data centers, we were wrestling with a problem that would define the next two decades of computing: how do you let multiple organizations occupy the same physical infrastructure while maintaining the absolute fiction that they’re alone?
I call it a fiction deliberately. Because that’s what it is. A carefully constructed illusion. And understanding why that fiction works, why it must work, reveals something profound about the nature of boundaries, identity, and what we mean when we say something is “ours.”
The Assignment That Changed Everything
I was Chief Network Architect in the Office of the CTO, reporting to David Crofford. My background was WAN backbone engineering, MPLS, MP-BGP, OSPF, the dark arts of making packets traverse continents while pretending the network was simpler than it actually was. I’d spent years building the invisible highways that connected enterprises to their outsourced data centers.
Then came the assignment: build the first Perot Cloud.
[And I was not alone: I had three brothers-in-arms, namely Jeff Mersberger, my mentor and Network & Security wizard; Bryan Carter, the VMware guru; and Jon Bryant, our resident storage guru.]
The requirements were straightforward on paper. Multiple customers. Same physical infrastructure. Complete isolation. Each tenant needed to believe, no, needed to know, that their data, their applications, their entire digital existence was sovereign. Untouchable. As private as if they’d built their own data center.
The budget, naturally, didn’t allow for actually building separate data centers.
So I reached for what I knew. The same architectural patterns I’d used to stretch enterprise networks across continents using MPLS Virtual Routing and Forwarding (VRF) instances. The insight was simple but powerful: if we could create virtual isolation at the WAN edge, why couldn’t we extend that same isolation into the data center itself?
What I didn’t realize at the time was that I was building something more than a network architecture. I was constructing a philosophy of digital existence. A framework for understanding how entities can share the same substrate while maintaining distinct identities. The technical decisions we made in 2008 would become the foundational patterns for every modern cloud platform, every Kubernetes namespace, every VPC you’ve ever provisioned.
The Technical Foundation
VRFs as Parallel Universes
Let me explain what a VRF actually is, because the concept is more profound than most network engineers appreciate.
In traditional routing, a router maintains a single routing table. All connected networks, all learned routes, all destinations live in one unified view of the world. Every packet arriving at the router consults this single table to determine its fate. It’s democratic in a sense, all traffic shares the same understanding of network topology.
A VRF shatters this unity. It creates a completely separate routing instance within the same physical router. Separate routing table. Separate forwarding table. Separate ARP cache. To packets inside a VRF, the routes in other VRFs literally don’t exist. They’re not hidden or filtered, they’re simply not there. The packet can’t reach them because, from its perspective, there’s nothing to reach.
[Figure 1]
Notice something remarkable in that diagram. All three tenants use 10.0.0.0/8. All three use 192.168.1.0/24. In a traditional network, this would be catastrophic, overlapping address space causing routing chaos. But inside their respective VRFs, each tenant’s 10.0.0.0/8 is a completely different entity. Same address. Different universe.
This is the first philosophical insight: identity isn’t about the name you bear, but about the context in which that name has meaning.
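To make the overlap concrete, here is an illustrative IOS-style sketch (VRF names, route distinguishers, and addresses are hypothetical, not our production values). Two VRFs are defined on one router, and each is free to reuse the same prefix:

```
! Two isolated routing instances on the same physical router
ip vrf TENANT_A
 rd 65000:100
ip vrf TENANT_B
 rd 65000:200
!
! The same 192.168.1.0/24 exists twice, once per routing instance,
! with no conflict: each interface consults only its own VRF's table
interface GigabitEthernet0/0
 ip vrf forwarding TENANT_A
 ip address 192.168.1.1 255.255.255.0
interface GigabitEthernet0/1
 ip vrf forwarding TENANT_B
 ip address 192.168.1.1 255.255.255.0
```

On a traditional router the second `ip address` statement would collide with the first; bound to separate VRFs, the two subnets never meet.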
Stretching the Paradigm
VRF-Lite in the Data Center
The WAN was the easy part. MPLS PE (Provider Edge) routers had supported VRFs for years. The challenge was extending this isolation into the data center fabric, where we couldn’t run full MPLS.
Enter VRF-Lite.
VRF-Lite provides the logical separation of VRFs without the MPLS label-switching overhead. On Cisco 6500s, 7600s, and later Nexus 7000 series switches, we could create the same multi-tenant isolation using 802.1q VLAN tagging to carry VRF-aware traffic across the data center.
The architecture looked like this:
[Figure 2]
Each VRF extended seamlessly from the MPLS WAN edge through the data center core, down through aggregation and access layers, all the way to the hypervisor’s virtual switch. A packet from Tenant A’s branch office would traverse the MPLS backbone inside VRF_TENANT_A, enter the data center still inside VRF_TENANT_A, and reach Tenant A’s virtual machines without ever touching, or even seeing, Tenant B’s infrastructure.
The VLAN-to-VRF mapping was the critical piece. Each tenant got a range of VLANs dedicated to their VRF:
[Figure 3]
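On the trunk links between layers, that mapping takes the form of 802.1q subinterfaces, each bound to its tenant's VRF. A hedged sketch, with hypothetical VLAN numbers and addresses:

```
! VRF-Lite: one dot1Q subinterface per tenant VLAN on the trunk,
! each placed into the matching VRF (illustrative values)
interface TenGigabitEthernet1/1.100
 encapsulation dot1Q 100
 ip vrf forwarding VRF_TENANT_A
 ip address 10.1.0.1 255.255.255.0
!
interface TenGigabitEthernet1/1.200
 encapsulation dot1Q 200
 ip vrf forwarding VRF_TENANT_B
 ip address 10.1.0.1 255.255.255.0
```

Again both tenants can use identical addressing on their side of the trunk; the VLAN tag selects the VRF, and the VRF selects the universe.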
The Nested VLAN Problem
Recursion as a Design Pattern
We quickly hit a scaling problem. The 802.1q standard allows 4,094 VLANs. Sounds like a lot until you realize each tenant needs multiple VLANs for network segmentation within their environment. With ambitious growth targets, we’d exhaust VLAN space before we exhausted rack space.
The solution was nested VLANs, 802.1ad, also called Q-in-Q or provider bridging. The concept is elegant: wrap one VLAN tag inside another. The outer tag identifies the tenant; the inner tag identifies the network segment within that tenant.
[Figure 4]
A packet from Tenant A’s web tier would carry:
• Outer VLAN (S-VLAN): 100 (Tenant A)
• Inner VLAN (C-VLAN): 10 (Web Tier)
A packet from Tenant B’s web tier would carry:
• Outer VLAN (S-VLAN): 200 (Tenant B)
• Inner VLAN (C-VLAN): 10 (Web Tier)
Both use inner VLAN 10 for their web tier. No conflict. The outer tag creates the separation.
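At the switch port level, Q-in-Q is disarmingly simple to configure. An illustrative IOS-style sketch (VLAN numbers hypothetical): the provider-facing edge port pushes the outer S-VLAN tag onto everything the tenant sends, inner C-VLAN tags and all.

```
! Outer S-VLAN for Tenant A
vlan 100
 name S-VLAN_TENANT_A
!
! Tenant hand-off port: all of the tenant's inner C-VLANs are
! tunneled inside S-VLAN 100 across the provider fabric
interface GigabitEthernet0/10
 description Tenant A hand-off (Q-in-Q)
 switchport access vlan 100
 switchport mode dot1q-tunnel
 spanning-tree bpdufilter enable
```

From the tenant's perspective their VLAN 10 arrives intact on the far side; they never see the outer tag that kept them separate in transit.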
This recursive encapsulation reveals the second philosophical insight: boundaries are not absolute, they’re contextual. What constitutes “inside” and “outside” depends entirely on which layer of abstraction you’re observing.
VXLAN
When Even Q-in-Q Isn’t Enough
As virtualization density increased, even Q-in-Q hit limitations. The 12-bit inner and outer VLAN IDs gave us a theoretical maximum of roughly 16 million tag combinations, but practical deployment constraints, spanning tree domains, MAC address table sizes, broadcast domain management, demanded something more scalable.
VXLAN (Virtual Extensible LAN) emerged as the answer. It took the encapsulation concept to its logical extreme: wrap Layer 2 frames inside UDP packets, giving us a 24-bit Virtual Network Identifier (VNI). That’s 16 million possible segments.
[Figure 5]
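A flood-and-learn VTEP configuration in the NX-OS style gives a feel for the mapping (a sketch with hypothetical VNIs and multicast groups, not a production template):

```
! Enable VXLAN and VLAN-to-VNI mapping
feature nv overlay
feature vn-segment-vlan-based
!
! Each tenant VLAN maps to a 24-bit VNI carried over the UDP underlay
vlan 100
 vn-segment 10100
vlan 200
 vn-segment 20100
!
! The VTEP: encapsulation endpoint anchored to a loopback
interface nve1
 no shutdown
 source-interface loopback0
 member vni 10100
  mcast-group 239.1.1.100
 member vni 20100
  mcast-group 239.1.1.200
```

The underlay only ever sees UDP packets between loopback addresses; the tenant topology lives entirely in the VNI mappings.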
But VXLAN wasn’t just about scale. It fundamentally changed the relationship between the physical and logical network. The underlay, the actual switches and cables, became nothing more than a transport mechanism. The overlay, the VXLAN segments mapped to tenant VRFs, became the “real” network from the tenant’s perspective.
This inversion is the third philosophical insight: the physical substrate becomes invisible when the abstraction is complete enough. Tenants don’t care about spine-leaf topologies or ECMP hashing. They care about their networks, their security boundaries, their sovereignty.
The Complete Architecture
End-to-End Isolation
Let me show you what the complete architecture looked like, from a tenant’s branch office to their virtualized workloads:
[Figure 6]
The beauty of this design was its consistency. The same VRF that existed on the PE router existed, logically, on the Nexus 7000, on the 6500 aggregation layer, and conceptually even in the hypervisor's virtual switch configuration. A unified security and isolation domain stretching from WAN edge to virtual machine.
The Control Plane
MP-BGP as the Source of Truth
VRFs provide data plane isolation, but the control plane, how routes are learned and distributed, is equally critical. Multi-Protocol BGP (MP-BGP) with VPNv4 address families became the glue that held everything together.
[Figure 7]
Route Targets (RTs) determined which VRFs could see which routes. A route exported with RT 65000:100 would only be imported by VRFs configured to accept RT 65000:100. This gave us fine-grained control over information sharing, most tenants were completely isolated, but we could selectively allow controlled inter-tenant communication when business requirements demanded it.
The route reflector architecture scaled this to hundreds of PE routers without requiring full-mesh iBGP peerings. Every PE advertised its VRF routes to the route reflectors, which selectively reflected them to other PEs based on RT configuration.
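The control plane wiring is compact. An illustrative IOS-style sketch (ASN, route targets, and the route reflector address are hypothetical): routes are stamped with RTs on export and filtered by RT on import, while the VPNv4 session to the route reflector carries them between PEs.

```
! Per-VRF policy: which routes this VRF emits and accepts
ip vrf TENANT_A
 rd 65000:100
 route-target export 65000:100
 route-target import 65000:100
!
router bgp 65000
 neighbor 192.0.2.1 remote-as 65000
 ! VPNv4 session to the route reflector; extended communities
 ! carry the route targets
 address-family vpnv4
  neighbor 192.0.2.1 activate
  neighbor 192.0.2.1 send-community extended
 ! Inject the tenant's connected networks into their VPN
 address-family ipv4 vrf TENANT_A
  redistribute connected
```

Selective inter-tenant communication is just another import statement: add `route-target import 65000:999` for a shared-services VRF, and only those routes cross the border.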
This is the fourth philosophical insight: sovereignty isn’t about complete isolation, it’s about controlled interaction. The ability to choose what enters and exits your domain.
Security Beyond Isolation
Defense in Depth
VRF isolation was necessary but not sufficient for true multitenancy. We layered additional security controls throughout the architecture:
[Figure 8]
VRF-aware ACLs ensured that even if a routing misconfiguration occurred, traffic couldn’t escape its VRF boundary. Zone-based firewalls provided stateful inspection within each tenant’s traffic flows. Per-VRF IPsec tunnels encrypted traffic even as it traversed our “trusted” MPLS backbone, because trust is a vulnerability, not a feature.
The monitoring layer was perhaps most critical. Each VRF had its own NetFlow export, its own traffic analysis, its own alerting thresholds. A DDoS attack against Tenant A’s infrastructure wouldn’t mask anomalies in Tenant B’s traffic patterns. Sovereignty extended to visibility.
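The fail-safe ACL layer is worth seeing in miniature. A hedged IOS-style sketch (prefixes and names hypothetical): even if a route leaked into the wrong VRF, traffic outside the tenant's address block would be dropped, and logged, at the boundary.

```
! Belt-and-suspenders guard on the tenant subinterface:
! permit only the tenant's own address block, log everything else
ip access-list extended TENANT_A_GUARD
 permit ip 10.0.0.0 0.255.255.255 any
 deny   ip any any log
!
interface TenGigabitEthernet1/1.100
 ip access-group TENANT_A_GUARD in
```

The `log` keyword on the deny entry doubles as a tripwire: any hit means the routing isolation above it has already failed.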
The Philosophy of Digital Borders
Now we arrive at the deeper question. Why does any of this matter beyond the technical implementation?
Consider what we built. Multiple organizations, potentially competitors, sharing the same physical switches, the same power supplies, the same cooling systems. Their packets traversing the same fiber. Their data resting on the same storage arrays. Yet each maintained absolute conviction in their isolation, their security, their sovereignty.
This works because sovereignty isn’t about physical separation. It never has been.
Think about national borders. The air doesn't stop at the border. Neither do radio waves, or migrating birds, or river water. What makes a border meaningful is the agreement about what can and cannot cross it, and the enforcement mechanisms that uphold that agreement.
VRFs are border enforcement mechanisms. Route Targets are immigration policies. ACLs are customs checkpoints. The physical infrastructure is shared geography, contested, overlapping, fundamentally common, but overlaid with logical structures that create meaningful separation.
[Figure 9]
The Trust Problem
When Isolation Isn’t Enough
Here’s where the philosophy gets uncomfortable. VRF isolation is cryptographically unenforced. It relies on correct configuration. A misconfigured route leak, an incorrectly applied ACL, a fat-fingered VLAN assignment, any of these could breach the isolation boundary.
This is fundamentally different from cryptographic isolation. When you encrypt data with a key that only you possess, the isolation is mathematically enforced. No configuration error can accidentally expose your data (though configuration errors in key management certainly can).
VRF-based multitenancy occupies an interesting middle ground. It’s stronger than “promise not to look at each other’s data” but weaker than “mathematically impossible to access each other’s data.” It’s administrative isolation, not cryptographic isolation.
[Figure 10]
For most use cases, administrative isolation is sufficient. The threat model isn’t “malicious insider with root access to the network control plane.” It’s “accidental exposure due to misconfiguration” or “tenant A’s security breach shouldn’t impact tenant B.” VRFs handle these beautifully.
But this trust model requires something that pure cryptographic isolation doesn’t: faith in the operator. Tenants must trust that Perot (later Dell) maintained configuration hygiene. Trust that change management processes caught errors. Trust that monitoring systems would detect anomalies.
This reveals the fifth philosophical insight: sovereignty in shared systems always involves trust in the mediating authority. The question isn’t whether to trust, but whom, and for what.
Evolution
From VRFs to VPCs to Namespaces
The patterns we established at Perot in 2008 became the blueprint for modern cloud architecture. When Amazon Web Services introduced Virtual Private Clouds (VPCs), they were implementing the same fundamental concept: isolated routing domains for each tenant. When Kubernetes introduced namespaces and network policies, same idea. When zero-trust networking emerged with micro-segmentation, still the same idea.
[Figure 11]
The abstraction layers keep piling up. Containers within VMs within VPCs within availability zones within regions. Each layer adds another boundary, another policy enforcement point, another opportunity to define “inside” and “outside.”
But the fundamental question remains the same: how do we occupy shared space while maintaining distinct identity?
Practical Lessons for Modern Architects
Let me distill the lessons from this journey into actionable principles:
First, design for failure at every boundary. Assume your VRF will leak routes. Assume your VXLAN will have a misconfigured VNI. Then ask: what additional controls prevent that failure from becoming catastrophic? Defense in depth isn’t paranoia; it’s engineering prudence.
Second, make isolation visible. Every tenant should be able to verify their isolation without trusting your word. Per-tenant monitoring, per-tenant logging, per-tenant security scanning. Transparency builds trust more effectively than promises.
Third, control the control plane. Data plane isolation is useless if the control plane is compromised. MP-BGP route leaks, SDN controller vulnerabilities, API server exploits, these bypass all your beautiful VRF configurations. The control plane is the keys to the kingdom.
Fourth, plan for selective permeability. Pure isolation is rarely the goal. Tenants need to reach shared services, connect to partners, access the internet. Design your boundaries with well-defined, auditable crossing points rather than pretending isolation is absolute.
Fifth, understand your trust model. Are you providing administrative isolation or cryptographic isolation? Does your threat model include malicious operators? Be honest about what you’re actually protecting against, and don’t oversell your security guarantees.
The Enduring Questions
Nearly two decades later, I still find myself returning to the questions that architecture raised.
What does it mean to be sovereign in a shared world? Not just for networks, but for nations, for individuals, for digital identities? We all share the same planet, the same internet, increasingly the same AI systems that shape our information environment. Where do the meaningful boundaries exist?
Is isolation even desirable? The network effects that make shared infrastructure valuable are the same ones that make isolation costly. Every tenant we added to Perot Cloud reduced per-tenant costs. Every VRF that couldn’t communicate with other VRFs missed opportunities for collaboration. The tension between isolation and interconnection never resolves; it just finds temporary equilibria.
And perhaps most fundamentally: when we create the illusion of isolation, when tenants feel alone even as they share substrate, have we created something real? Is sovereignty a physical fact or a psychological state?
The VRF on a Nexus 7000 doesn't know it's isolated. It doesn't experience sovereignty. It just has a routing table that doesn't include certain entries. The sovereignty exists in the tenant's perception, in their operational confidence, in the trust relationship with the operator.
Maybe that’s all sovereignty ever is. Not an absolute condition, but a relationship. A set of agreements, enforced by mechanisms we collectively accept. The fiction becomes real when enough entities believe in it and act accordingly.
In 2008, I thought I was building a cloud platform. I was actually building a philosophy of coexistence encoded in routing tables and VLAN tags. The packets don’t care about borders. But we do. And in that caring, in that persistent need to define what is ours and what is theirs, we create the very boundaries we then struggle to maintain.
The architecture works. It has worked for two decades. Millions of organizations trust their most sensitive operations to shared infrastructure protected by the descendants of those VRF configurations we deployed in Plano, Texas.
But the questions remain. They always will. Because the technology answers “how” while we keep asking “why.”
The author served as Chief Network Architect in the Office of the CTO at Perot Systems from 2006–2009, where he designed and implemented the multitenancy architecture that became the foundation for Dell Services’ vCloud platform.