Datacenter: Transparent Interconnection of Lots of Links (TRILL)

Today we are going to talk about one of the protocols used in the datacenter environment, named TRILL. TRILL stands for Transparent Interconnection of Lots of Links and is a technology that addresses the same requirements as FabricPath and has almost the same benefits.

The requirements and advantages of FabricPath were given in the FabricPath section of this chapter. This section discusses the limitations of current Layer 2 networking in detail and how TRILL addresses them. TRILL, as of this writing, is an IETF standard.

With the changes going on in data center environments, the current STP has many drawbacks, as discussed here:

Inefficient use of links: To avoid loops in a Layer 2 network, STP ensures that there is only one path from a source to a destination. To achieve this, many of the links in a switch are put in a blocked state so that data traffic does not flow through those links.

With the rapid growth in server-to-server communication, called east-west traffic, blocking some of the links can cause congestion on the links that remain in an unblocked state.

Shutting down or blocking links in a switch reduces the value of a switch that has the capacity to host many ports capable of carrying high-bandwidth traffic.

A Layer 3-like behavior is required, in which all the links in a switch can be used while still providing a loop-free mechanism.

Long time to converge: STP is not designed for topologies such as massively scalable data centers (MSDCs). The time taken for all the nodes in a network to reach a steady state is high, and traffic is disrupted until the steady state is reached.

Fig 1.1- TRILL protocol

Whenever there is a change in the topology due to a link going up or down, or when nodes are added or removed, spanning-tree recalculation results in traffic disruption. Clearly, a loop-prevention mechanism is needed that can scale well in an MSDC environment. Again, a Layer 3 behavior is needed, where the routing protocol takes care of avoiding loops and can also scale to a large number of nodes.

Scaling the MAC table: With the emergence of virtual machines, each with its own assigned MAC address, the size of the Layer 2 table can grow by a large margin, especially at the core of the data center network, which learns the MAC addresses of all the VMs.

The cost of the hardware may also increase with the increase in the size of the hardware Layer 2 table. It is preferable to have a clean separation between the overlay network and the end-host access network, so that the core network has a Layer 2 table whose size can be quantified by the number of switches in the overlay network, rather than by the number of end-host VMs in the entire network, which may not be a trivial undertaking.

If the size of the Layer 2 table at the core is too small, some entries may not be learned. This can result in a Layer 2 lookup miss, which in turn can result in a flood in the network. Flooding can consume unnecessary network bandwidth and can consume CPU resources on the servers, because the servers may receive the flooded frames. Clearly, a tunneling protocol such as MAC-in-MAC is needed so that the core switches do not need to learn all the end-host MAC addresses.

TRILL Requirements
Control protocol: TRILL uses Layer 2 IS-IS as its control protocol. The idea is to take the advantages of a Layer 3 routing protocol while maintaining the simplicity of a Layer 2 network. Every node in a TRILL network is referred to as an RBridge (Router-Bridge). Every RBridge is identified by its nickname. In other words, a nickname is the routable entity in a TRILL network, just like an IP address in an IP network.

Unlike Layer 3, there are no separate protocols for unicast and multicast. The Layer 2 IS-IS protocol takes care of populating the routing table for unicast traffic, thereby ensuring multiple shortest equal-cost paths (ECMP) to all the RBridges, and also creates trees for multicast traffic. Needless to say, Layer 2 IS-IS also ensures loop-free routing. At the same time, TRILL inherits the TTL field from the Layer 3 world to ensure that traffic caught in transient loops eventually expires.
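
To make the ECMP idea concrete, here is a small Python sketch (not TRILL code; the topology and nicknames are invented) that computes every equal-cost shortest-path next hop between two RBridge nicknames, assuming all links have equal IS-IS cost:

```python
from collections import deque

def ecmp_next_hops(graph, src, dst):
    """Return every neighbor of src that lies on a shortest path to dst.

    graph maps an RBridge nickname to its directly connected neighbors;
    all links are assumed to have equal cost (hop count).
    """
    # BFS from dst gives each node's distance to the destination.
    dist = {dst: 0}
    queue = deque([dst])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    # A neighbor is a valid ECMP next hop if it is one hop closer to dst.
    return sorted(n for n in graph[src] if dist.get(n) == dist[src] - 1)

# Hypothetical four-RBridge topology: two equal-cost paths from 10 to 20.
topology = {
    10: [30, 40],
    30: [10, 20],
    40: [10, 20],
    20: [30, 40],
}
print(ecmp_next_hops(topology, 10, 20))  # both 30 and 40 are best next hops
```

A real RBridge would install both next hops and hash flows across them, which is exactly the link utilization STP could not deliver.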

Preserve plug-and-play features of classical Ethernet: One of the main advantages of a Layer 2 network is its plug-and-play nature, which relieves the administrator of heavy configuration, unlike in a Layer 3 network. TRILL achieves this with dynamic nickname allocation, where every node derives its own nickname and the protocol ensures there is no duplication. The configuration requirement of TRILL is minimal.
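
A toy illustration of the plug-and-play idea, assuming only that nicknames are 16-bit values drawn from a non-reserved range and must be unique. The real protocol advertises each choice in IS-IS LSPs and resolves collisions by priority and System ID (RFC 6325); here we simply retry locally:

```python
import random

def choose_nickname(taken, rng):
    """Pick a random 16-bit nickname that is not already in use.

    A stand-in for TRILL's auto-allocation: no administrator ever
    assigns these values by hand.
    """
    while True:
        nick = rng.randint(1, 0xFFBF)  # 0x0000 and 0xFFC0-0xFFFF are reserved
        if nick not in taken:
            taken.add(nick)
            return nick

rng = random.Random(7)  # fixed seed so the sketch is repeatable
in_use = set()
nicknames = [choose_nickname(in_use, rng) for _ in range(4)]
print(nicknames)  # four distinct nicknames, no configuration needed
```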

Layer 2 table scaling: TRILL uses a MAC-in-MAC encapsulation, where the traffic from the host is encapsulated by the ingress RBridge. The core RBridges see only the outer MAC header, which has the MAC address of the source and destination RBridge. Consequently, the MAC table at the core RBridges will not be polluted with all the end host MAC addresses. 
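
The scaling benefit can be sketched numerically. In this hypothetical Python example (all MAC strings are invented), a thousand VM frames enter the fabric through just two ingress RBridges, so a core RBridge that learns only outer source MACs ends up with two entries:

```python
def core_mac_table(frames):
    """Collect the source MACs a core RBridge would learn from outer headers.

    Each frame is (outer_smac, inner_smac); with MAC-in-MAC the core sees
    only the outer header, so thousands of host MACs collapse into a
    handful of RBridge MACs.
    """
    return {outer for outer, _inner in frames}

# 1,000 hypothetical VM frames entering through two ingress RBridges.
frames = [("rb-10-mac" if i % 2 else "rb-40-mac", f"vm-{i}-mac")
          for i in range(1000)]
table = core_mac_table(frames)
print(len(table))  # 2 entries, regardless of how many VMs there are
```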

TRILL Frame Format
The ingress RBridge encapsulates the original Layer 2 frame with a new source and destination MAC, which are the MAC addresses of the source RBridge and the next-hop RBridge, respectively; a TRILL header, which carries the ingress and egress nicknames identifying the source and destination RBridges, respectively; and the original Layer 2 frame with a new CRC. Any incoming 802.1Q or Q-in-Q tag is preserved in the inner header.
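
Based on the header layout in RFC 6325 (Version, Reserved, M bit, Op-Length, and Hop Count in the first 16 bits, followed by the 16-bit egress and ingress nicknames), the fixed part of the TRILL header can be packed as follows. This is an illustrative sketch, not production code:

```python
import struct

def trill_header(egress_nick, ingress_nick, hop_count, multicast=False):
    """Pack the fixed 6-byte TRILL header described in RFC 6325.

    First 16 bits: Version (2), Reserved (2), M bit (1), Op-Length (5),
    Hop Count (6); then the 16-bit egress and ingress nicknames.
    """
    first = (0 << 14) | (0 << 12) | (int(multicast) << 11) | (0 << 6) \
            | (hop_count & 0x3F)
    return struct.pack("!HHH", first, egress_nick, ingress_nick)

# Nicknames from the data plane example below: ingress 10, egress 20.
hdr = trill_header(egress_nick=20, ingress_nick=10, hop_count=32)
print(hdr.hex())  # '00200014000a'
```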

TRILL Data Plane Operation
By now you may have already figured out that the high-level data path operation is similar to FabricPath. To describe the data path from Host 1 to Host 2, assume that all the control plane information has already been learned and that Host 1 and Host 2 already know each other's MAC addresses. The basic steps involve encapsulation of the frame with the TRILL header at the ingress RBridge, switching based on the TRILL header within the TRILL network, and finally decapsulation of the frame at the egress RBridge. The following steps provide more detail on this operation.

Host 1 uses its MAC address of A as the source MAC (SMAC) and sends a classical Ethernet frame, which is destined to Host 2 with a destination MAC (DMAC) address of B. On receiving this frame, the ingress RBridge (Nickname 10) does a (VLAN, DMAC) lookup. The MAC lookup points to the destination (Nickname 20) as the egress RBridge for this Ethernet frame. 

So the ingress switch encapsulates this frame using the TRILL header for forwarding the frame to the TRILL core port. The source and destination nicknames are set as 10 and 20, respectively. The outer DMAC is the MAC address of the next-hop RBridge, and the outer SMAC is the MAC address of the source RBridge. 

The core RBridge (Nickname 30 in this example) forwards the frame based on the best path to the destination RBridge Nickname 20. In this case there are two paths to reach the egress RBridge with Nickname 20, but the best path is a directly connected link; therefore, the packet is forwarded over the directly connected interface to the switch with Nickname 20. 

The TTL is decremented, and the outer SMAC and DMAC are rewritten with the MAC address of this RBridge and RBridge 20's MAC address. Just like regular IP routing, the nicknames in the TRILL header are not modified, but at each hop the outer DMAC and SMAC are rewritten and the TTL is decremented. The destination RBridge 20 receives this frame. Because the incoming frame is destined to this RBridge, it removes the outer MAC header and the TRILL header. It then forwards the frame to Host 2 based on the inner (DMAC, VLAN) lookup.
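
The per-hop behavior described above can be sketched as follows; the frame is modeled as a plain dictionary with invented field names, not real wire format:

```python
def forward_hop(frame, this_rbridge_mac, next_hop_mac):
    """One TRILL transit hop: rewrite outer MACs, decrement the TTL.

    The nicknames and inner frame stay untouched, mirroring how IP
    routing leaves the IP payload alone while rewriting the Ethernet
    header at every hop.
    """
    if frame["hop_count"] == 0:
        raise ValueError("hop count expired: frame dropped")
    return {**frame,
            "outer_smac": this_rbridge_mac,
            "outer_dmac": next_hop_mac,
            "hop_count": frame["hop_count"] - 1}

# Frame from the walk-through: ingress RBridge 10, egress 20, via core 30.
frame = {"outer_smac": "rb10-mac", "outer_dmac": "rb30-mac",
         "ingress_nick": 10, "egress_nick": 20, "hop_count": 32,
         "inner": ("B", "A", "payload")}
at_rb20 = forward_hop(frame, "rb30-mac", "rb20-mac")
print(at_rb20["hop_count"], at_rb20["outer_dmac"])  # 31 rb20-mac
```

RBridge 20 would then see its own MAC as the outer DMAC, strip the outer header and TRILL header, and deliver the inner frame to Host 2.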