Does Switching Hardware Matter In An SDN World?

I wrote a longish piece called What To Look For In An SDN Controller that Network World published on 2-Dec-2013. It was one of those weird moments when I started seeing mentions on Twitter referencing the article, and I couldn’t remember what the piece was all about. A number of weeks went by between writing the piece and the piece actually going live. Which…I don’t mind. It’s just a little odd to see the piece again after more or less forgetting that I’d written it. Media is an interesting business, and I’m still learning how things work.

That said, after re-reading the article to recall what my fingers spit into the keyboard, I think it still holds up. When I wrote it, ACI and NSX hadn’t quite taken the world by storm yet; writing it today, I might have added a few comments on both. But overall, the article is, I think, still relevant.

One question came up on Twitter about the piece: was I implying that switches don’t matter all that much in an SDN architecture? My opinion is that switching hardware matters a lot in many SDN architectures, though admittedly not all. Some thoughts on this.

1. Not all silicon is created equal. Switches use special-purpose silicon (ASICs) to move packets through them. Not all ASICs have the same capabilities, and not all ASICs can handle the same amount of complex processing at wire speed. This is why OpenFlow implementation in Ethernet switches is a bit of a crapshoot right now. While many switches are OF 1.0-compliant and a few are OF 1.3-compliant, that compliance gives no indication of how quickly those switches can handle specific OF functions. That might or might not be important, depending on your network’s specific needs. As the Infrastructure track chair for Interop Las Vegas 2014, I’ve asked Curt Beckmann, member of the ONF Chipmakers’ Advisory Board, to address this, among other things, in his Interop session titled “How SDN-Ready Is Your Network Infrastructure?”
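To make that concrete, here’s a minimal sketch using the Ryu controller framework and OpenFlow 1.3 (the addresses and port numbers are invented for illustration, not taken from any vendor). Just about any OF 1.3-compliant switch will accept this FlowMod; whether the set-field rewrite actually happens in the ASIC at wire speed or gets punted to a slow path is entirely a property of the silicon.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class RewriteFlowExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def install_rewrite_flow(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Match one TCP flow and rewrite its destination IP in transit.
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                                ipv4_dst='10.1.1.10', tcp_dst=80)
        actions = [parser.OFPActionSetField(ipv4_dst='10.1.1.20'),
                   parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]

        # The spec says the switch must accept this; the spec says nothing
        # about whether the rewrite runs at wire speed in hardware.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))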

2. Complex SDN applications doing hop-by-hop flow manipulation through a data center will require scale. By this, I mean that data flows traverse multiple hardware devices sitting between source and destination. If you want to manage that flow (i.e., treat it in a way that’s something other than simply forwarding it), then the ASICs at every hop need to be able to handle that.

Assuming a Clos tree architecture where certain network switches are doing very heavy lifting (not a foregone conclusion as data center topologies evolve, but still typical), scaling into the many thousands and possibly millions of flows in hardware is key. Few switches are designed for this today. While I haven’t seen a detailed switching architecture document for the Cisco Nexus 9000 series that explains the “merchant plus” custom Cisco silicon in detail, I am fairly confident that Cisco is going after the marketplace “loaded for bear” on exactly this point. The total ACI solution is supposed to scale to 1M endpoints, if I understand it correctly.
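To put rough numbers on why the flow tables get so big, here’s a back-of-the-envelope sketch. Every figure in it is an assumption I picked for illustration, not a vendor number.

# Back-of-the-envelope flow scale for a hypothetical leaf/spine (Clos) fabric.
# Every number here is an illustrative assumption, not a vendor figure.
racks = 500
hosts_per_rack = 40
managed_flows_per_host = 50      # concurrent flows you want individually treated
spine_switches = 8

total_flows = racks * hosts_per_rack * managed_flows_per_host
per_spine = total_flows // spine_switches   # assumes even ECMP spread across spines

print(f"Flows to manage fabric-wide : {total_flows:,}")   # 1,000,000
print(f"Hardware entries per spine  : {per_spine:,}")     # 125,000

Modest assumptions get you to a million managed flows fabric-wide and six figures of hardware entries per spine switch, which is exactly the territory where the ASIC either keeps up or the whole idea falls over.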

Enterasys (just bought by Extreme Networks) has been playing in this space for years with their CoreFlow ASIC. CoreFlow2 scales up to a claimed 64M flows – read their brief. It sort of astonishes me that no one talks much about Enterasys, since they have been doing with software and hardware for a while what others are talking about like it’s new. Enterasys’s endpoint management capabilities are beyond anything else I’ve seen on the market today.

3. Some of the most practical SDN applications I’ve seen so far are delivered on a combination of hardware and software. That’s HP, NEC, Plexxi, and Enterasys, off the top of my head. And Cisco is clearly heading down that road with onePK & ACI, despite ACI drawing criticism like “hardware defined networking.” Sigh. That list is not meant to rule out other innovative SDN platforms delivered in switch-agnostic software, like VMware NSX, Nuage Networks, Anuta Networks, and Embrane. And fill in the blank here with your favorite SDN tool I didn’t happen to mention. But in my opinion, there’s a difference between virtualizing networks or providing orchestration platforms and delivering full-blown software defined networking. These are all facets of the same jewel.

4. Some applications with special requirements are leveraging custom silicon in the form of FPGAs, such as what’s found in the Arista 7124FX. Quote below from the official Arista product page.

The Arista 7124FX provides 24-ports of wirespeed and ultra-low latency 10Gb Ethernet using the flexible SFP+ package, 8 of those ports route through a dedicated and fully-programmable FPGA where customers can load their own custom applications. The FPGA supports 160Gbps of throughput and offers over 6 million programmable logic gates to run high-performance and mission-critical applications in the network.

The Arista 7124FX is most commonly used in financial applications where you want the application to be as close as possible to the data source or inline with the data stream. This enables increased competitive advantage when coupled with the deterministic and ultra-low latency forwarding path. Other applications include financial services exchanges, government, defense, and the high performance computing world.

5. If all you need to do is virtualize your network at the hypervisor edge, you have a different problem to solve than many. I’m well aware of the argument that Ethernet switches need only be dumb frame forwarders with L3 intelligence that deliver traffic between the virtual switches resident at the hypervisor edge. If that’s all your network needs to do, then I agree that (to a point) the switch you use to deliver those frames doesn’t need to be exciting. You’re going to be doing most services in the hypervisor, like firewalling and probably load balancing, then encapsulating that traffic into VXLAN or something similar before shoving the packet out the vSwitch and into the physical network. There are a number of very large networks that need pretty much this and not a lot else, so a switch needs to have some basic forwarding intelligence, and maybe that’s all you’ll care about. Enough table space to learn where the VTEPs are? Got OSPF? Or maybe an L2 fabric? Well then, forward away.
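For a feel of what that hypervisor-edge encapsulation looks like on the wire, here’s a small sketch built with Scapy (assuming a release that ships the VXLAN layer); the MACs, addresses, and VNI are all made up for illustration. The point is that the physical underlay only ever forwards on the outer headers, which is why a plain L3 fabric can be “good enough” for this model.

# Sketch of a VXLAN-encapsulated frame as a vSwitch might emit it toward the
# physical network. Addresses and VNI are invented examples.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN   # ships with recent Scapy releases

# Inner frame: tenant VM to tenant VM, as seen by the virtual switch.
inner = (Ether(src='de:ad:be:ef:00:01', dst='de:ad:be:ef:00:02') /
         IP(src='10.0.1.10', dst='10.0.1.20'))

# Outer headers: VTEP to VTEP across the physical underlay, UDP port 4789.
frame = (Ether() /
         IP(src='192.168.50.1', dst='192.168.50.2') /
         UDP(sport=49152, dport=4789) /
         VXLAN(vni=5001) /
         inner)

frame.show()   # the underlay forwards on the outer IP/UDP headers only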

But virtualized networks delivering IaaS or other apps requiring multi-tenancy aren’t what most networks are doing today. Most networks are NOT provider networks. They are much smaller networks servicing enterprises, EDUs, non-profits, etc. And those networks are hard to manage, with a wide variety of service delivery and security requirements depending on the endpoint, physical location, and nature of the data being transported. That’s a whole other world I think cloud network operators sometimes overlook, even though enterprise and campus is by far the larger customer base in the wide world of networking. Admittedly, I carry the enterprise networking torch, as that’s where I’ve spent most of my career.

Could enterprise networks adopt a model like VMware’s NSX? I see multi-tenancy being applicable to enterprise networks, yes; I can easily make that use case. But operationally, that would be a tough change to make. I think it’s more likely that enterprises will adopt SDN applications that can manage flows across their infrastructure end to end rather than re-architecting their networks and mindsets to map to a container (network virtualization) model. And if I’m right, that means silicon is going to keep on mattering.

All of that to underscore the point that I believe silicon still matters, and will continue to matter. Will it matter to everyone? No. But vendors will continue to innovate and differentiate based on switching hardware. For better or worse, I think we can count on SDN solutions continuing to emerge that are in some way hardware dependent.