Aruba SD-WAN: Dynamic Path Control

Today let’s take a deeper look at the Dynamic Path Control feature. In our last article, we covered the Path Conditioning feature of the Aruba EdgeConnect solution, which enables carrier-grade reliability over commodity links such as the public Internet. Today we are going to cover Dynamic Path Control in detail.

The picture below gives you an idea of what we are talking about today and where this feature fits among the traffic handling techniques available in the Aruba SD-WAN solution.


Figure 1: Dynamic Path Control

Dynamic Path Control is a method for dynamically selecting the appropriate underlay links for a Business Intent Overlay. It is most effective when multiple links are available to the destination, and it is applied per overlay through the Link Bonding Policy.

Take Google Maps as an example: there are usually multiple streets or roads to a destination, and Google Maps uses traffic conditions and distance to calculate the best route for you to drive. In the same way, there are multiple underlay links in the network, and Dynamic Path Control together with the Link Bonding Policy works out the optimal path(s) the Business Intent Overlay uses for all traffic it identifies.

When multiple links are available, Dynamic Path Control selects the appropriate links associated with an overlay on a per-packet basis. The Link Bonding Policy is the part of Dynamic Path Control that controls the FEC ratio and the failover times.
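
To make the idea more concrete, here is a minimal Python sketch of per-packet path selection. This is not Aruba’s actual algorithm; the Link class, the metric weights, and the score() function are illustrative assumptions that only show how measured loss, latency, and jitter could be combined to pick the best primary link for each packet.

```python
# Illustrative sketch only: link names, metrics, and weights are assumptions,
# not the EdgeConnect implementation.
from dataclasses import dataclass

@dataclass
class Link:
    name: str          # e.g. "MPLS1", "INET1"
    loss_pct: float    # measured packet loss (%)
    latency_ms: float  # measured latency (ms)
    jitter_ms: float   # measured jitter (ms)

def score(link: Link) -> float:
    # Lower is better: weight loss most heavily, then jitter, then latency.
    # The weights are arbitrary placeholders for illustration.
    return link.loss_pct * 100 + link.jitter_ms * 2 + link.latency_ms

def select_path(primary_links: list[Link]) -> Link:
    # Pick the best-scoring primary link for the next packet.
    return min(primary_links, key=score)

links = [
    Link("MPLS1", loss_pct=0.0, latency_ms=20, jitter_ms=1),
    Link("MPLS2", loss_pct=0.1, latency_ms=22, jitter_ms=2),
    Link("INET1", loss_pct=0.5, latency_ms=35, jitter_ms=8),
    Link("INET2", loss_pct=1.2, latency_ms=40, jitter_ms=10),
]
print(select_path(links).name)  # -> MPLS1 in this example
```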

In the figure below, a “RealTime” Business Intent Overlay groups all voice, video, and signaling applications.


Figure 2: DPC & Link Bonding Policy for RealTime Overlay

As per the configuration above, the RealTime overlay uses 2x MPLS and 2x Internet links as primary links. Which of these four links actually carries the overlay traffic is decided by Dynamic Path Control. Say it selects MPLS1 and MPLS2 as the best paths to forward traffic; the FEC ratio and the failover time are then controlled by the Link Bonding Policy. These settings are per overlay, so a different overlay such as “BulkData” may use different links and a different link bonding policy when it is created.
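
The per-overlay nature of these settings can be pictured with a small sketch. The field names below are my own and do not match the Orchestrator’s real schema; the point is only that each Business Intent Overlay carries its own set of primary links and its own link bonding policy.

```python
# Hypothetical per-overlay settings; field names are assumptions for illustration.
overlays = {
    "RealTime": {
        "primary_links": ["MPLS1", "MPLS2", "INET1", "INET2"],
        "link_bonding_policy": "High Availability",
    },
    "BulkData": {
        "primary_links": ["INET1", "INET2"],
        "link_bonding_policy": "High Efficiency",
    },
}

for name, cfg in overlays.items():
    print(f"{name}: links={cfg['primary_links']}, "
          f"bonding={cfg['link_bonding_policy']}")
```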

As per Figure 2, there are four Link Bonding Policies that can be attached to BIOs (summarized again in a short sketch after the list):

  • High Availability – should be used for the most critical applications. It requires a minimum of two primary links and uses a 1:1 FEC ratio: the original data is transferred over the link with the best SLA, and the parity packets are sent over the other primary link. Bandwidth efficiency is 50%, since the second primary link carries the parity packets. This is the most resilient setting, but with reduced usable bandwidth.
  • High Quality – another resilient setting for critical applications, but with more usable bandwidth. It supports adaptive FEC: if there is no loss, no bandwidth is used for FEC data; if there is loss, FEC can consume up to 20% of the bandwidth to keep communication reliable.
  • High Throughput – strikes a balance between throughput and resiliency. It also requires a minimum of two primary links and can load-balance traffic across them.
  • High Efficiency – the same as High Throughput but without FEC. This setting is a good fit when your primary links are private, MPLS-like links with no loss.
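
Here is the same information restated as a small lookup table. The field names are my own, and any property not described above (for example, exactly how High Throughput applies FEC) is flagged as an assumption in the comments.

```python
# Hedged restatement of the four policies; field names are my own, and only
# properties described in the text are encoded as facts.
LINK_BONDING_POLICIES = {
    "High Availability": {
        "min_primary_links": 2,
        "fec": "1:1 (parity packets on the second primary link)",
        "load_balance": False,   # original data stays on the best-SLA link
    },
    "High Quality": {
        "min_primary_links": 2,  # assumed, in line with the other policies
        "fec": "adaptive: 0% with no loss, up to ~20% under loss",
        "load_balance": None,    # not specified above
    },
    "High Throughput": {
        "min_primary_links": 2,
        "fec": "not detailed above (assumed lighter than High Quality)",
        "load_balance": True,
    },
    "High Efficiency": {
        "min_primary_links": 2,  # same as High Throughput
        "fec": "none",
        "load_balance": True,
    },
}

# Example: look up what the RealTime overlay's policy implies.
print(LINK_BONDING_POLICIES["High Availability"])
```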

We talked about the link bonding policies requiring a minimum of two primary links so that traffic can be load balanced and business-critical applications get a reliable environment. It is recommended that both primary links have similar loss/latency characteristics; otherwise, when loss occurs on one link, traffic is scaled down to the slower link.

That was all I had for today. I hope you found it informative!