
Cisco Datacenter: Quick Facts about OTV, vPC, VDC and Nexus 7K

Today I am going to discuss the data center infrastructure in an enterprise network, and how such a network is designed with Cisco devices and products.

Before we start with the infrastructure of the data center environment, we should understand the difference between a simple network (small and mid-sized enterprises) and a data center network: the data center has far higher traffic requirements.

Whenever we talk about the data center, people wonder why there is so much traffic in that environment. If you believe that in next-generation networks all the traffic lives in the cloud, then what does the cloud actually mean to you?

The cloud is simply a service provider's data center environment, where applications are hosted on servers (physical or virtual) with compute and storage capabilities.

Fig 1.1 - Datacenter Infra

There are many big players in the market providing various cloud solutions, such as Office 365 from Microsoft (applications hosted in Microsoft's data center network) or OpenDNS (a DNS security layer hosted in the Cisco cloud), along with many other applications from different service providers.

Let's discuss the terms and infrastructure used in the data center environment, where a lot of things are evolving:
  • Datacenter Switching
  • Datacenter Storage
  • Datacenter Compute Infrastructure
  • Datacenter Hyper-Converged Infrastructure
  • Software-Defined Networking and Access
  • Datacenter Security Infrastructure
We will cover each topic in detail in separate articles, but for now we will only discuss the basic infrastructure of the data center switching domain. The examples here use Cisco equipment only.

Quick Facts:
  • Hardware recommendations: Cisco Nexus 7K at the core and distribution layers, and Cisco Nexus 5K with Nexus 2K fabric extenders at the access layer.
  • Nexus switches use different kinds of modules, such as F modules (switching modules) and M modules (routing modules). You can mix them to achieve the design required for your network.
  • The concept of vPC (Virtual Port Channel) is used in the data center Nexus switching environment; a minimal configuration sketch follows this list.
  • Different vPC techniques, such as single-sided vPC and double-sided vPC, are used when connecting host devices in the data center environment.
  • A single chassis can be divided virtually into core and distribution layers; VDC (Virtual Device Context) is the technology used for this.
  • OTV (Overlay Transport Virtualization) is another technique used in the data center environment to interconnect two different data centers.
  • OTV can transport unicast and multicast traffic between data center sites across the globe.
  • Application Centric Infrastructure (ACI) is another approach, built around the east-west traffic in the data center network, where both source and destination sit inside your data center.
  • ACI works in a spine-leaf architecture, with Cisco Nexus 9500 Series switches at the spine and Cisco Nexus 9300 Series switches at the leaf.
  • Automation plays a major role in this environment: policies such as SNMP, traceroute, QoS, or access lists are pushed onto the network remotely. Cisco offers APIC-EM, while Arista offers its CloudVision solution.
  • Analytics is another major product area in the data center portfolio; Cisco's offering is Cisco Tetration.
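As a quick reference for the vPC points above, here is a minimal NX-OS configuration sketch for one of the two vPC peer switches. The domain ID, addresses, and port-channel numbers are example values only, and the second peer needs a matching, mirror-image configuration.

    ! Minimal vPC sketch for one Nexus peer (all values are examples)
    feature vpc
    feature lacp

    vpc domain 10
      ! Keepalive typically runs between the mgmt0 interfaces of the two peers
      peer-keepalive destination 192.168.10.2 source 192.168.10.1 vrf management

    ! Peer link between the two vPC peer switches
    interface port-channel 1
      switchport
      switchport mode trunk
      vpc peer-link

    ! Port channel toward a downstream switch or host (single- or double-sided vPC)
    interface port-channel 20
      switchport
      switchport mode trunk
      vpc 20

    interface Ethernet1/20
      switchport
      switchport mode trunk
      channel-group 20 mode active

In a double-sided vPC the downstream device is itself a vPC pair of Nexus switches, but the configuration on this side stays the same.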
Important Facts:
  • The vPC concept is similar to a regular port channel, but instead of forming the port channel between two devices, a vPC spans more than two: the downstream device bundles links that terminate on two different Nexus switches. It is used in the data center network.
  • vPC also differs from VSS on the Cisco Catalyst 6500 switches: VSS joins two switches under a single control plane with two data planes, while in a vPC environment the two Nexus switches keep two data planes and two control planes.
  • With VDC (Virtual Device Context), one Cisco Nexus 7K chassis is virtually divided into two, three, or more VDCs, depending on the supervisor engine used in the hardware (a basic VDC sketch is shown after this list).
  • With SUP1 we can have a maximum of 4 VDCs in an environment (for example Admin, Core, Distribution, and OTV VDCs), with SUP2 we can have 5 VDCs, and with SUP2E we can have 9 VDCs in the data center environment.
  • You can have F and M modules. F represents the switching modules (F1, F2, F2e, and F3); F1 and F2e modules can be mixed with M modules in the same VDC, but the F2 module needs its own VDC and cannot be mixed with other modules. The M modules are M1, M2, and M3.
  • Various chassis can be used at the core, OTV, and distribution layers: the Cisco Nexus 7004, Cisco Nexus 7009, Cisco Nexus 7010, and Cisco Nexus 7018.
  • At the access layer we can use various models from the Cisco Nexus 5000 and Cisco Nexus 5500 Series with FEX devices such as the Cisco Nexus 2232 and Cisco Nexus 2248, depending on the port capacity and the number of ports required for end hosts.
  • OTV offers multicast and unicast as transports between sites. Multicast is the preferred transport because of its flexibility and smaller overhead when communicating with multiple sites.
  • Cisco Nexus 7000 Series and Cisco ASR 1000 Series both support multicast and unicast cores. For unicast cores the Nexus 7000 Series requires Cisco NX-OS Release 5.2(1) or later and the Cisco ASR 1000 Series requires Cisco IOS-XE 3.9 or later (a unicast-only adjacency-server sketch follows this list).
  • Running OTV with Cisco Nexus 7000 Series Switches at one site and Cisco ASR 1000 Series routers at another site is fully supported. In this scenario, keep the separate scalability numbers of the two platforms in mind, because you will have to account for the lowest common denominator.
  • The site-id command (otv site-identifier) was introduced as a way to harden multi-homing for OTV. It is a configurable option that must be the same for devices within the same data center and different between devices in different data centers (see the overlay sketch after this list).
  • As of Cisco NX-OS Release 6.2(2) on the Cisco Nexus 7000 Series, F1 and F2e modules can be internal interfaces for OTV. These modules cannot perform OTV functions themselves and can be used only as internal interfaces.
  • OTV on the Cisco Nexus 7000 Series does not allow packet fragmentation. However, the Cisco ASR 1000 Series does support this feature. It is important to ensure that if any site running OTV is using a Cisco Nexus 7000 Series Switch for encapsulation, the encapsulated packets are not fragmented on any device.
  • OTV currently enforces switch-virtual-interface (SVI) separation for the VLANs being extended across the OTV link, meaning that OTV is usually in its own VDC. With the VDC license on the Cisco Nexus 7000 Series you have the flexibility to have SVIs in other VDCs and have a dedicated VDC for OTV functions.
  • OTV is also a fault-domain isolation feature: with OTV, fault domains are actually isolated and separate from each other without the requirement of any additional configuration.
  • As of Cisco NX-OS Release 6.2(2), selective unknown unicast flooding based on the MAC address is supported. This feature is especially important for networks that use Microsoft's Network Load Balancer (a one-line sketch follows this list).
  • OTV deployments are used to extend Layer 2 between two or more physically separate sites.
  • OTV can not only extend Layer 2 between physically separate data centers, it also can split large data centers to help mitigate any failures or storms that might occur. Some Massively Scalable Data Centers (MSDCs) use OTV to logically separate their networks. With OTV, this design isolates failure domains so any loops or failures do not propagate to the whole data center. As with the traditional deployment model used in enterprises, the configuration is the same when using OTV to logically split a larger data center. 
  • Using Virtual Port Channels (vPCs) and OTV together provides an extra layer of resiliency and is thus recommended as a best practice. Because OTV is usually run in its own VDC, a vPC between the OTV and aggregation VDCs in a dual-homed scenario is the most common application. There are no constraints or special requirements when running both together.
  • OTV adds 42 bytes of encapsulation overhead to each packet, thus requiring a larger maximum transmission unit (MTU) for traffic to pass. It is also worth noting that OTV on the Cisco Nexus 7000 Series does not support fragmentation, so the larger MTU must be provided end to end across the transport.
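As mentioned in the VDC points above, a single Nexus 7K chassis is carved into VDCs from the default (admin) VDC. The sketch below is illustrative only: the VDC names and interface ranges are assumptions, and the number of VDCs and allowed module types depend on the supervisor engine and line cards actually installed.

    ! Sketch: carving core and distribution VDCs out of one Nexus 7K chassis
    ! (run from the default/admin VDC; names and interface ranges are examples)
    vdc CORE
      ! Optionally restrict which module families this VDC may use,
      ! for example an F2-only VDC as noted above: limit-resource module-type f2
      allocate interface Ethernet3/1-8

    vdc DIST
      allocate interface Ethernet3/9-16

    ! Jump into a VDC and configure it as if it were a separate switch
    switchto vdc CORE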
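To tie the OTV points together, here is a minimal sketch of one OTV edge device using the multicast transport. The site identifier, VLAN ranges, group addresses, and IP addresses are example values, and the join interface must already have reachability (and multicast support) toward the remote site.

    ! OTV edge device sketch, multicast transport (all values are examples)
    feature otv

    ! Site VLAN and site identifier: identical within a data center,
    ! different between data centers
    otv site-vlan 99
    otv site-identifier 0x1

    ! Join interface toward the DCI core; jumbo MTU because OTV adds 42 bytes
    ! of overhead and the Nexus 7000 does not fragment OTV packets
    interface Ethernet1/1
      mtu 9216
      ip address 10.1.1.1/30
      ip igmp version 3
      no shutdown

    interface Overlay1
      otv join-interface Ethernet1/1
      otv control-group 239.1.1.1
      otv data-group 232.1.1.0/28
      otv extend-vlan 100-150
      no shutdown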
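When the core between the sites cannot carry multicast, OTV can run in unicast-only mode with an adjacency server instead of the control and data groups shown above. This is a sketch only; the adjacency-server address is an example and would normally be the join-interface address of one of the edge devices.

    ! Sketch: unicast-only OTV core (NX-OS 5.2(1) or later, as noted above)
    ! On the edge device acting as the adjacency server:
    interface Overlay1
      otv join-interface Ethernet1/1
      otv adjacency-server unicast-only
      otv extend-vlan 100-150
      no shutdown

    ! On the remaining edge devices (10.1.1.1 being the adjacency server):
    interface Overlay1
      otv join-interface Ethernet1/1
      otv use-adjacency-server 10.1.1.1 unicast-only
      otv extend-vlan 100-150
      no shutdown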
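For the selective unknown unicast flooding feature mentioned above (NX-OS 6.2(2) and later), the MAC address that must keep receiving flooded traffic, such as a Microsoft NLB cluster MAC, is listed per VLAN on the OTV edge device. The MAC address and VLAN below are placeholders.

    ! Flood unknown unicast across OTV only for this specific MAC and VLAN
    otv flood mac 02bf.1234.5678 vlan 110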