Network Service Headers (NSH): Creating a Service Plane for Cloud Networks (Cisco)
Cisco has developed Network Service Headers (NSH), a new service chaining protocol that is rapidly gaining acceptance in the industry. Building on lessons learned from earlier versions of vPath, and recognizing that NSH would only succeed with broad acceptance from a wide variety of network services and security vendors, Cisco moved quickly to propose NSH as a standard and has brought a multi-vendor proposal to the IETF. NSH is added to network traffic, in the packet header, to create a dedicated services plane that is independent of the underlying transport protocol. In general, NSH describes a sequence of service nodes that the packet must be routed to prior to reaching the destination address, and adds this metadata about the packet and/or service chain to an IP packet.
Another SDN-related protocol pops up (although it’s been kicking around for about a year now), this one aimed at service chaining. Service chaining is the idea of moving a traffic flow through a series of services, the most common examples I hear cited being firewalls, load balancers, and intrusion detection devices. The terms “service chaining” and “policy” are often used together, as the services in a conceptual chain are often what’s used to ensure a traffic flow conforms to a given policy. Okay. So, what’s NSH all about? NSH addresses the problem of how to easily route traffic between the services in the chain.
Let’s take a step back. Today, if we network engineering folks need to apply, say, stateful packet inspection and filtering to traffic, we make sure that the traffic is routed through a firewall. The simplest way to do this is to make the firewall the default gateway for a segment. A layer 2 “bump in the wire” deployment is increasingly common. Policy-based routing (oops, just threw up on my keyboard a little) is yet another strategy to shove traffic through a firewall. Or whatever appliance we need to shove it through. The problem is that getting traffic to flow through the device we need it to flow through is a pain in the rear end, one that constrains the network design and makes topology changes hard. Oh, you want that traffic to flow through multiple services? You ever built a DMZ-positioned load-balancer that must gateway through a firewall for policy enforcement? That’s a lot of 802.1q subinterfaces. This is not the scale you’re looking for.
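For anyone who hasn’t had the pleasure, the PBR approach I’m groaning about looks roughly like this on an IOS box. A minimal sketch — the addresses, ACL number, route-map name, and interface are all hypothetical:

```
! Match traffic sourced from the 10.1.1.0/24 segment
access-list 101 permit ip 10.1.1.0 0.0.0.255 any

! Force matched traffic toward the firewall instead of the routing table
route-map TO-FIREWALL permit 10
 match ip address 101
 set ip next-hop 10.1.1.254

! Apply on the ingress interface
interface GigabitEthernet0/1
 ip policy route-map TO-FIREWALL
```

Now multiply that by every ingress point and every service in the chain, and you see why nobody loves maintaining it.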
Service chaining is implemented in a few different ways in the SDN world. One way is to tunnel traffic between services until it reaches the end of the chain. I’ve seen this done with VXLAN. I’ve also seen a demo from one of the OpenDaylight projects that used LISP to tunnel between services. Okay…this can all work, and if there’s an SDN controller doing the heavy lifting of chaining the services together, it’s not so hard on the network engineering team. Anything’s better than a complex PBR scenario. The catch? Most often, these were all NFV services – virtual instances chained together in the context of a hypervisor and vSwitch, where tunneling from one vSwitch to another can be counted on. What if we want to chain between physical devices jacked into the network fabric anywhere there’s enough network pipe and available RUs? (That is, NOT worrying about physical network placement like we do today.) Hmm. Hard.
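At the packet level, the tunnel-per-hop approach just means wrapping the frame toward each service in turn. Here’s a minimal Python sketch of the VXLAN part only — the 8-byte header from RFC 7348; the outer UDP/IP headers (destination port 4789) and all the controller logic are hand-waved:

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend an 8-byte VXLAN header to an inner Ethernet frame."""
    header = struct.pack("!BBBB", VXLAN_FLAG_VNI_VALID, 0, 0, 0)
    header += struct.pack("!I", vni << 8)  # 24-bit VNI in the top bits
    return header + inner_frame

def vxlan_decap(packet: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    (word,) = struct.unpack("!I", packet[4:8])
    return word >> 8, packet[8:]
```

The controller’s job in this model is to keep re-tunneling the flow toward the next service — which is exactly the state NSH tries to carry in the packet itself instead.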
This is one of the needs NSH is supposed to address. As I understand it thus far, NSH is what I like to think of as a packet shim, intended to work in a variety of data center deployments. To quote the article again, “NSH is inserted between the original packet or frame and any outer network transport encapsulation such as MPLS, VXLAN, GRE, etc. As NSH is transport agnostic, it can be carried by many widely deployed transport protocols.” And what data is inside the shim? “NSH describes a sequence of service nodes that the packet must be routed to prior to reaching the destination address and adds this metadata information about the packet and/or service chain to an IP packet.”
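Concretely, the shim is only a few bytes. Here’s a rough Python sketch of an MD Type 1 NSH header: a 4-byte base header, a 4-byte service path header (24-bit Service Path Identifier plus 8-bit Service Index), and 16 bytes of fixed metadata context. The bit layout below follows what was eventually standardized in RFC 8300; the draft circulating at the time differed in a few fields, so treat this as illustrative:

```python
import struct

def build_nsh(spi: int, si: int, next_proto: int = 0x1,
              context: bytes = bytes(16)) -> bytes:
    """Build an MD Type 1 NSH header (bit layout per RFC 8300).

    spi: 24-bit Service Path Identifier (which chain this packet is on)
    si:  8-bit Service Index (where it is in the chain)
    next_proto: 0x1 = IPv4 inner payload
    context: 16 bytes of fixed-length metadata
    """
    assert len(context) == 16
    ver, o_bit, ttl = 0, 0, 63   # TTL defaults to 63
    length = 6                   # total header length in 4-byte words (8 + 16 bytes)
    md_type = 0x1
    base = (ver << 30) | (o_bit << 29) | (ttl << 22) | (length << 16) \
           | (md_type << 8) | next_proto
    service_path = (spi << 8) | si
    return struct.pack("!II", base, service_path) + context

def parse_service_path(nsh: bytes) -> tuple[int, int]:
    """Return (spi, si) from an NSH header."""
    (word,) = struct.unpack("!I", nsh[4:8])
    return word >> 8, word & 0xFF
```

That whole 24-byte blob then rides inside whatever transport the fabric already speaks — VXLAN, GRE, MPLS — which is the “transport agnostic” part of the pitch.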
In other words, go here. Now go here. Now go here. Made it this far? Okay, then now you can be delivered to your final destination.
“But what about scale?” you ask, knowing a shiny-new NSH probably isn’t going to work with the ASICs you’ve got. Good question, and one I asked of Cisco. The answer back from Paul Quinn follows.
We are planning on widespread NSH support across Cisco’s software and hardware product lines, as well as in open-source software. We are able to rapidly innovate and develop using software platforms, and in many deployments those software platforms meet the needs of the network operator. So, as you might expect, we support NSH today in software. Of course, adding ASIC support will allow us to take advantage of the scale and performance offered by dedicated hardware. To that end, we have a roadmap for NSH support in many of our ASIC families. There’s also a middle ground: some form of programmable hardware such as an FPGA. Those devices can often support NSH via software update.
So, yep – software today, hardware ASICs in the future, and FPGAs maybe soon for those devices that have them (hint: you probably don’t have FPGAs in your switches).
Lots more reading you can do on NSH here (the blog post I quoted from). Also check out the latest IETF draft on NSH, which includes authors from Cisco, Broadcom, Intel, Microsoft, Citrix, Rackspace, and Red Hat.