Cisco ACI 2.0: Multi-Pod Fabric
Today I am going to talk about the Cisco ACI 2.0 Multi-Pod fabric
infrastructure and the components involved. Cisco ACI is one of the most
in-demand technologies in the market for policy-based data center
infrastructure.
Multi-Pod is the natural evolution of the ACI stretched-fabric design. The control protocols that run inside the fabric (IS-IS, COOP, MP-BGP) run as separate instances in each Pod, which means the failure domains are split between Pods, increasing the overall resilience of the design.
Fig 1.1 - Basic design of Cisco ACI Multi-Pod
Even though the control-plane domains are separated, we still have a single
change domain: a configuration or policy definition applied to any of the
APIC nodes is propagated to all the Pods managed by the single APIC
cluster, as shown in Fig 1.1 above.
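To illustrate the single management and change domain, here is a minimal Python sketch (using the requests library) that logs in to an APIC and lists every Pod the cluster manages via the fabricPod class of the REST API. The APIC address and credentials are placeholders, so treat this as an assumption-laden example rather than a ready-made script.

```python
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
USER, PWD = "admin", "password"     # placeholder credentials

session = requests.Session()
session.verify = False  # lab only; use proper certificates in production

# Authenticate against the APIC REST API; the session keeps the APIC cookie.
login_payload = {"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login_payload).raise_for_status()

# One query against the single APIC cluster returns every Pod it manages.
resp = session.get(f"{APIC}/api/node/class/fabricPod.json")
resp.raise_for_status()

for pod in resp.json()["imdata"]:
    attrs = pod["fabricPod"]["attributes"]
    print(f"Pod {attrs['id']}: {attrs['dn']}")
```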
The different Pods may represent different islands (rooms,
halls) deployed in the same physical data center location, or could map to
geographically dispersed data centers (up to 10 msec RTT latency).
Different workloads that are part of the same functional
group (EPG), like Web servers, can be connected to (or moved across) different
Pods without having to worry about provisioning configuration or policy in the
new location.
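As a rough sketch of what that looks like in practice, the example below attaches an existing EPG to a leaf port located in Pod 2 through a static path binding, reusing the EPG and its contracts unchanged. The tenant, application profile, EPG name, node ID, port, and VLAN encapsulation are all hypothetical.

```python
import requests

APIC = "https://apic.example.com"             # hypothetical APIC address
EPG_DN = "uni/tn-Demo/ap-Web/epg-WebServers"  # hypothetical tenant/app/EPG

session = requests.Session()
session.verify = False  # lab only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Attach the existing EPG to a leaf port that sits in Pod 2. Only the
# attachment point (the static path) is new; no policy is redefined.
binding = {
    "fvRsPathAtt": {
        "attributes": {
            "tDn": "topology/pod-2/paths-201/pathep-[eth1/10]",  # node 201, eth1/10 in Pod 2
            "encap": "vlan-100",
        }
    }
}
resp = session.post(f"{APIC}/api/mo/{EPG_DN}.json", json=binding)
print(resp.status_code)
```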
At the same time, seamless Layer 2 and Layer 3 connectivity
services can be provided between endpoints independently of the physical
location where they are connected.
From a physical perspective, the different Pods are
interconnected by leveraging an “Inter-Pod Network” (IPN). Each Pod connects to
the IPN through the spine nodes; the IPN can be as simple as a single Layer 3
device, or can be built with a larger Layer 3 network infrastructure.
The IPN devices allow the establishment of spine-to-spine and leaf-to-leaf
VXLAN tunnels across Pods. There are a few basic requirements for a switch
platform to be chosen as an IPN node (a sketch of a matching IPN-facing
interface template follows the list):
- PIM Bidir support: this allows BUM (broadcast, unknown unicast, multicast) traffic to be exchanged between endpoints in different Pods over shared multicast trees built in the IPN.
- Jumbo frame support: a jumbo MTU (9150 B) is required to handle VXLAN-encapsulated traffic.
- Support for DHCP relay, so that spine and leaf nodes in remote Pods can reach the APIC cluster for auto-discovery and provisioning.
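The requirements above map directly onto the IPN-facing interfaces. Below is a minimal Python sketch that renders an illustrative NX-OS-style sub-interface template covering the three points (PIM sparse mode toward the bidir trees, jumbo MTU, DHCP relay to the APICs). The interface names, addresses, OSPF process, and the VLAN-4 sub-interface reflect a typical Multi-Pod IPN design but are assumptions, not a validated configuration.

```python
# Illustrative NX-OS-style template for one IPN-facing sub-interface.
# All names and addresses are placeholders (not a validated configuration).
IPN_INTERFACE_TEMPLATE = """\
interface {intf}.4
  description to-pod{pod}-spine
  encapsulation dot1q 4
  mtu 9150
  ip address {addr}
  ip router ospf IPN area 0.0.0.0
  ip pim sparse-mode
  ip dhcp relay address {apic}
"""

def render_ipn_interface(intf: str, pod: int, addr: str, apic: str) -> str:
    """Fill in per-link values for one spine-facing IPN interface."""
    return IPN_INTERFACE_TEMPLATE.format(intf=intf, pod=pod, addr=addr, apic=apic)

if __name__ == "__main__":
    # One sub-interface toward a Pod 1 spine and one toward a Pod 2 spine.
    print(render_ipn_interface("Ethernet1/1", 1, "192.168.1.1/30", "10.0.0.1"))
    print(render_ipn_interface("Ethernet1/2", 2, "192.168.2.1/30", "10.0.0.1"))
```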
Running a separate instance of the COOP protocol inside each Pod implies that information about local endpoints (MAC, IPv4/IPv6 addresses and their location) is only stored in the COOP database of the local spine nodes.
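As a rough mental model (a toy sketch, not the actual COOP implementation), each Pod's spines can be pictured as holding their own endpoint table, so a lookup in one Pod cannot resolve an endpoint learned in another; all MACs, IPs, and TEP addresses below are made up.

```python
# Toy model of per-Pod COOP endpoint databases (illustrative only, not the
# real protocol): each Pod's spines store only locally learned endpoints.
coop_db = {
    "pod-1": {"00:50:56:aa:bb:01": {"ip": "10.1.1.10", "leaf_tep": "10.0.72.64"}},
    "pod-2": {"00:50:56:aa:bb:02": {"ip": "10.1.1.20", "leaf_tep": "10.1.72.64"}},
}

def local_lookup(pod: str, mac: str):
    """A spine in `pod` can only resolve endpoints held in its own Pod's database."""
    return coop_db[pod].get(mac)

print(local_lookup("pod-1", "00:50:56:aa:bb:01"))  # local endpoint: resolved
print(local_lookup("pod-1", "00:50:56:aa:bb:02"))  # learned in Pod 2: None
```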
Since we need to provide a consistent view of endpoints across all Pods, an overlay control plane runs between the spine nodes of the different Pods to exchange reachability information. The overlay protocol used to exchange Layer 2 and Layer 3 endpoint information is MP-BGP EVPN.
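Continuing the toy model, the spine-to-spine overlay can be pictured as each Pod advertising its local entries to the others, which install them pointing at the advertising Pod's spine proxy TEP rather than at individual leaf nodes. This is a simplified sketch of the idea, not of the actual MP-BGP EVPN machinery, and the proxy TEP addresses are invented.

```python
# Simplified sketch of the inter-Pod overlay exchange: every Pod advertises its
# local endpoints, and remote Pods install them with the advertising Pod's
# spine proxy TEP as next hop (illustrative addresses and structure only).
spine_proxy_tep = {"pod-1": "10.0.0.33", "pod-2": "10.1.0.33"}

def exchange(per_pod_db: dict, proxy_tep: dict) -> dict:
    """Return each Pod's view after learning every other Pod's endpoints."""
    merged = {pod: dict(entries) for pod, entries in per_pod_db.items()}
    for src_pod, entries in per_pod_db.items():
        for dst_pod in per_pod_db:
            if dst_pod != src_pod:
                for mac, info in entries.items():
                    merged[dst_pod][mac] = {"ip": info["ip"], "next_hop": proxy_tep[src_pod]}
    return merged

per_pod_db = {
    "pod-1": {"00:50:56:aa:bb:01": {"ip": "10.1.1.10"}},
    "pod-2": {"00:50:56:aa:bb:02": {"ip": "10.1.1.20"}},
}
view = exchange(per_pod_db, spine_proxy_tep)
print(view["pod-1"]["00:50:56:aa:bb:02"])  # {'ip': '10.1.1.20', 'next_hop': '10.1.0.33'}
```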