The Ethernet Switching Landscape – Part 07 – Data Center Interconnect (DCI)


This is one of a multi-part series on the Ethernet switching landscape I wrote to support a 2-hour presentation I made at Interop Las Vegas 2014. Part 1 of this written series appeared on NetworkComputing.com. Search for the rest of this series.

One of the more specialized features that appear in a limited number of Ethernet switches is Data Center Interconnect (DCI). DCI is not a protocol, but rather a general networking industry term for bridging layer 2 domains between geographically distinct data centers. Put more simply, with DCI, you can stretch a VLAN between DCs. The question is, “Can you do it safely?”

The word “safely” is key, because it highlights an important feature of purpose-built DCI protocols: they attempt to control the issues inherent in stretching VLANs between multiple locations. If you’re not sure why this is a potential problem, consider the following.

  • Quite often, second, third, and fourth DCs are built by organizations to improve data resilience. In other words, one of the biggest points of building multiple data centers is to avoid a complete loss of service due to a natural catastrophe, long-lived power failure, equipment failure, or (critically for my point) network failure.
  • Well-designed networks protect network segments from each other to avoid fate sharing. Fate sharing is the idea that if one goes down, the others go down as well – they share the same fate.
  • Many, and perhaps even most, catastrophic network failures occur within a layer 2 domain, i.e. a VLAN. The failure is usually tied to a bridging loop creating a broadcast storm that makes traffic forwarding difficult or impossible.
  • In concept, stretching VLANs between DCs is a bad idea for this reason (and others). Assuming a DC is built to provide application resilience, stretching a VLAN into a second DC implies a potential for fate sharing – a bridging loop in one DC could impact the other DC as well. The sketch after this list illustrates the effect.
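
To make the fate-sharing point concrete, here is a minimal Python sketch of flooding in a stretched VLAN that contains a forwarding loop. This is my own toy model with made-up switch names, not a representation of any vendor's behavior; its only purpose is to show that, because Ethernet frames carry no TTL, copies of a single broadcast circulate in DC1 indefinitely and keep arriving in DC2 over the DCI link.

from collections import defaultdict

# Toy topology: a triangle of switches in DC1 (which closes a loop) plus
# a DCI link stretching the same VLAN to a single switch in DC2.
links = [
    ("dc1-sw1", "dc1-sw2"),
    ("dc1-sw2", "dc1-sw3"),
    ("dc1-sw3", "dc1-sw1"),   # this link closes the loop in DC1
    ("dc1-sw1", "dc2-sw1"),   # stretched VLAN across the DCI
]

neighbors = defaultdict(list)
for a, b in links:
    neighbors[a].append(b)
    neighbors[b].append(a)

# Each switch floods a broadcast out of every link except the one it
# arrived on. Ethernet has no hop count, so nothing ages the copies out.
in_flight = [("dc1-sw1", None)]          # one broadcast enters at dc1-sw1
frames_seen = defaultdict(int)           # broadcast frames received per switch

for tick in range(6):
    next_round = []
    for switch, came_from in in_flight:
        frames_seen[switch] += 1
        for nbr in neighbors[switch]:
            if nbr != came_from:
                next_round.append((nbr, switch))
    in_flight = next_round
    dc2_total = sum(n for sw, n in frames_seen.items() if sw.startswith("dc2"))
    print(f"tick {tick}: copies still circulating={len(in_flight)}, "
          f"broadcasts received in DC2 so far={dc2_total}")

Spanning tree is supposed to prevent the loop from forming in the first place, but once a loop does form anywhere in the stretched VLAN, both data centers feel it.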

DCI protocols like Cisco’s Overlay Transport Virtualization (OTV) and HP’s Ethernet Virtual Interconnect (EVI) recognize the issue of fate sharing, as well as the sub-optimal traffic patterns that can occur (for example, a host in DC2 using a default gateway located in DC1), and deal with both. How they achieve this is beyond the scope of this article, but we covered OTV in detail at Packet Pushers if you’d like more information.
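
As a rough illustration of that sub-optimal traffic pattern, the trivial sketch below counts DCI crossings for a single request and response when a workload in DC2 still points at a default gateway in DC1. The data center names and the assumption that return traffic re-enters through the gateway's DC are mine, purely for illustration.

def dci_crossings(vm_dc: str, gateway_dc: str) -> int:
    """Count DCI link crossings for one request/response to an external host."""
    crossings = 0
    if vm_dc != gateway_dc:
        crossings += 1   # outbound: VM must reach its gateway over the stretched VLAN
        crossings += 1   # return: traffic lands in the gateway's DC, bridged back to the VM
    return crossings

print("Gateway local to the VM   :", dci_crossings("dc2", "dc2"), "DCI crossings")
print("Gateway left behind in DC1:", dci_crossings("dc2", "dc1"), "DCI crossings")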

[Figure: Ethernet Switching Landscape – DCI]

Why is DCI an interesting technology?

DCI is interesting because it promotes application availability and makes better use of compute resources. Consider the following use cases often cited by DCI vendors.

  • Long-distance workload mobility. In other words, take a virtual machine in DC1 doing X, and move it to DC2, probably using vMotion. That VM’s workload is no longer in DC1. It’s now in DC2. That’s helpful if DC1’s x86 or storage infrastructure is getting busy, and DC2 has some spare capacity. Moving workloads around helps guarantee application responsiveness for end-users. Put another way, the end user experience of a given application will be more consistent as workloads are evenly dispersed within a data center and across data centers. This conversation also gets into GeoIP awareness and GSLB, but I think you get the idea.
  • Data center migration. In the past, I helped lead a DC migration where the old and new facilities were across the street from one another. We owned fiber between the two buildings. To migrate hosts from one data center to another, the design was very simple. Connect the old core switches to the new core switches via 802.1q. Carefully manage STP root bridges and HSRP primary and secondary nodes. Move hosts from the old DC to the new DC at will. Done. But what if the data center was migrating to a facility in another town or even across the country, separated by an L3 boundary? A simple 802.1q interconnect would not have worked. DCI protocols help in scenarios like this. A rough sketch of the per-VLAN planning follows this list.
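
For what it’s worth, here is a hypothetical per-VLAN planning sketch in Python for that kind of migration. The VLAN numbers, priority values, and the idea of generating a table at all are mine, not from any real tool; the sketch just captures the rule of thumb that the STP root bridge and the HSRP active gateway for a VLAN should sit on the same core pair, first in the old DC and then in the new one once that VLAN’s hosts have moved.

# Hypothetical migration planning helper: keep the STP root and the HSRP
# active router for each VLAN in whichever data center its hosts live.
VLANS = [10, 20, 30]          # illustrative VLAN IDs
MOVED = {30}                  # VLANs whose hosts have already moved to the new DC

def plan(vlan: int) -> dict:
    in_new_dc = vlan in MOVED
    return {
        "vlan": vlan,
        # Lower bridge priority wins the STP root election.
        "old_core_stp_priority": 8192 if not in_new_dc else 32768,
        "new_core_stp_priority": 32768 if not in_new_dc else 8192,
        # Higher HSRP priority (with preemption) becomes the active gateway.
        "old_core_hsrp_priority": 110 if not in_new_dc else 90,
        "new_core_hsrp_priority": 90 if not in_new_dc else 110,
    }

for v in VLANS:
    print(plan(v))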

What devices support DCI?

In fairness, DCI is not purely an Ethernet switching technology. DCI is accomplished with some sort of overlay, and nothing about that overlay requires a DCI endpoint to be an Ethernet switch. So, in the case of Cisco’s OTV, it is supported on some Nexus switches as well as certain ASR routers. In HP’s case, EVI is supported on a limited number of switches.

The key is to understand what specific devices support the DCI protocol you wish to implement. Further, you must investigate the specific DCI capabilities of each platform. For instance, OTV does not offer exactly the same functionality on both Nexus and ASR platforms.

What other protocols are used for DCI?

I’ve already introduced OTV and EVI, but other protocols can fulfill the function of DCI. Please take note that I am not making a design recommendation here. I’m merely pointing out what some folks have used to achieve DCI in their environments. These are not apples-to-apples protocols, and they should not be thought of as easily substituted for one another.

  • Cisco FabricPath or TRILL. A TRILL L2 routing domain can extend L2 between data centers effectively. Cisco’s version of TRILL is FabricPath. I have heard of FabricPath specifically being used for DCI in certain designs. This is potentially attractive, as more Cisco devices support FabricPath than support OTV.
  • VPLS. Virtual Private LAN Service is a way to extend an L2 domain between sites. VPLS runs over IP or MPLS, and is supported in a variety of platforms that enterprises would have access to, including the Cisco Catalyst 6500 series. VPLS is one of Juniper Networks’ favored solutions for DCI.
  • SPB. Shortest Path Bridging is a L2 interconnect technology similar to TRILL that evolved from a series of service-provider oriented protocols intended for geographic diversity and multi-tenancy, among other things. SPB is noted for ease of use, especially when compared to MPLS. I’m currently evaluating SPB as a way to build a service-provider like core/edge network without using MPLS.
  • L2TPv3. Layer 2 Tunneling Protocol version 3 uses pseudowires to create a point-to-point tunnel that extends L2 between the endpoints. It is designed to encapsulate many different sorts of frames, including Ethernet. To the best of my knowledge, there is no intelligence in L2TPv3 that protects against fate sharing.
  • EVPN. Up-and-coming is Ethernet VPN, which uses BGP to advertise MAC addresses across an L3 infrastructure. It’s a novel approach that’s gaining a lot of mindshare and implementation support in the industry. Packet Pushers has recorded a podcast on EVPN that is currently scheduled to publish on 14-July-2014 as Show 196. A conceptual sketch of the control-plane idea follows this list.
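
To show the shape of that idea without pretending to be real BGP code, here is a conceptual Python sketch of EVPN-style control-plane MAC learning: each edge device advertises its locally learned MAC addresses, and remote edges install them against the advertiser’s tunnel endpoint so they can forward across the L3 core by lookup rather than by flood-and-learn. The class, the addresses, and the simplified “advertise to peers” call are all invented for the illustration.

from typing import Dict, Tuple

MacRoute = Tuple[int, str]    # (VLAN/VNI identifier, MAC address)

class EvpnEdge:
    """A data-center edge device in a toy EVPN-like control plane."""

    def __init__(self, name: str, vtep_ip: str):
        self.name = name
        self.vtep_ip = vtep_ip
        self.local_macs: Dict[MacRoute, str] = {}    # (vni, mac) -> local port
        self.remote_macs: Dict[MacRoute, str] = {}   # (vni, mac) -> remote endpoint IP

    def learn_local(self, vni: int, mac: str, port: str, peers) -> None:
        # Learn a MAC on a local port, then "advertise" it to every peer,
        # standing in for a BGP EVPN MAC route advertisement.
        self.local_macs[(vni, mac)] = port
        for peer in peers:
            peer.remote_macs[(vni, mac)] = self.vtep_ip

    def lookup(self, vni: int, mac: str) -> str:
        # Forward by lookup instead of flooding across the DCI.
        key = (vni, mac)
        if key in self.local_macs:
            return f"local port {self.local_macs[key]}"
        if key in self.remote_macs:
            return f"tunnel to endpoint {self.remote_macs[key]}"
        return "unknown destination: flood or drop, per policy"

dc1 = EvpnEdge("dc1-edge", "192.0.2.1")
dc2 = EvpnEdge("dc2-edge", "192.0.2.2")

dc1.learn_local(vni=10100, mac="00:00:5e:00:53:01", port="Eth1/1", peers=[dc2])
print(dc2.lookup(10100, "00:00:5e:00:53:01"))   # reachable via dc1-edge's endpoint
print(dc2.lookup(10100, "00:00:5e:00:53:99"))   # never advertised, so unknown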

For More Information