
A Few Points About VMware EVO SDDC Networking


A Packet Pushers listener who heard us chatting about VMware’s EVO SDDC solution raised a few concerns about the networking functionality in the current version of EVO SDDC. I was able to talk briefly with Krish Sivakumar, Director of Product Marketing for EVO SDDC, and Ven Immani, Senior Technical Marketing Engineer for EVO SDDC at VMware, to help clarify some of the issues.

Background

EVO SDDC comes with switches running Cumulus Linux, the network operating system from Cumulus Networks. The switches & networking functionality are meant to be transparent to the customer. That is to say, the EVO SDDC solution comes with two top-of-rack (ToR) switches per rack, and racks are interconnected with spine switches, all of which are provided by VMware as part of the EVO SDDC purchase.

EVO SDDC does not expect that the switches will be configured by hand. Rather, the expectation is that they will be configured automatically. EVO SDDC users configure the compute, storage, and networking as a whole solution using a “single pane of glass” UI. There is no assumption that the existing network team will be inheriting a new Cumulus Linux-based network to learn how to configure.
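
VMware hasn’t detailed the provisioning internals here, but the general pattern of template-driven switch configuration is worth picturing. Below is a minimal sketch, assuming a made-up hostname scheme and a simplified Cumulus-style interfaces fragment; none of it is VMware’s actual template or workflow.

    # Hypothetical sketch of template-driven ToR provisioning. This is not
    # VMware's actual EVO SDDC workflow, just the general shape of switches
    # being "configured automatically" rather than by hand.
    from string import Template

    # Simplified, made-up fragment of a Cumulus-style interfaces stanza.
    TOR_TEMPLATE = Template("""\
    # $hostname (rack $rack)
    auto bridge
    iface bridge
        bridge-vlan-aware yes
        bridge-ports $ports
        bridge-vids $vids
    """)

    def render_tor_config(rack, tor, ports, vids):
        """Render one switch's config text from rack-level inputs."""
        return TOR_TEMPLATE.substitute(
            hostname="rack%d-tor%d" % (rack, tor), rack=rack, ports=ports, vids=vids
        )

    if __name__ == "__main__":
        # One rack, two ToR switches, identical bridge settings.
        for tor in (1, 2):
            print(render_tor_config(rack=1, tor=tor, ports="swp1 swp2", vids="100-110"))

The point is that each switch’s configuration is an output derived from rack-level inputs, which is exactly why the network team shouldn’t expect to own it line by line.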

Further, there is an assumption that the ToR and spine switches purchased as a part of the EVO SDDC solution are dedicated to EVO SDDC. VMware is not anticipating that customers will use spare ports that might be available to uplink other data center hosts.

That said, there are concerns about integrating the leaf-spine network that arrives as part of EVO SDDC with the rest of the network. In addition, there are concerns about the sorts of applications that can run on the EVO SDDC platform, considering that Cumulus Linux is the NOS of choice.

Raised concerns & VMware’s responses

Concern one: as currently implemented, the EVO SDDC network is layer 2 only, not layer 3.

VMware’s response. As the product heads to market by the end of the year, it will support upstream L3 connectivity via BGP or OSPF in the ToR switches. The 40G spine layer is, and will remain, layer 2 only; the spine’s job in the EVO SDDC environment is east-west connectivity among EVO SDDC racks, not connectivity to the rest of the data center.

Customers who need to route between workloads in other parts of the data center would use their existing core. Also note that a common scenario for EVO SDDC customers will be inclusion of VMware NSX for network virtualization, meaning that workloads are assigned to their own VXLAN domains, and interdomain routing would happen as it does in NSX environments today, possibly via a gateway device.
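
For anyone who hasn’t peeked inside an NSX transport network, the reason the physical fabric can stay simple is that workload traffic rides inside VXLAN: the original Ethernet frame becomes the payload of a UDP packet between VTEPs, so the leaf-spine fabric only ever forwards VTEP-to-VTEP traffic. Here is a rough sketch of that encapsulation using only the Python standard library; the VNI and frame contents are made-up values.

    # Rough sketch of VXLAN encapsulation (RFC 7348). The workload's Ethernet
    # frame becomes the payload of a UDP packet between VTEPs; the 24-bit VNI,
    # not a physical VLAN, identifies the logical L2 domain.
    import struct

    VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

    def vxlan_header(vni):
        """8-byte VXLAN header: flags (I bit set), reserved bits, 24-bit VNI, reserved byte."""
        flags_word = 0x08 << 24       # "I" flag marks the VNI as valid
        return struct.pack("!II", flags_word, vni << 8)

    def encapsulate(inner_frame, vni):
        """VXLAN payload = header + original L2 frame (outer IP/UDP is added by the VTEP)."""
        return vxlan_header(vni) + inner_frame

    if __name__ == "__main__":
        fake_frame = bytes(64)        # stand-in for a workload's Ethernet frame
        payload = encapsulate(fake_frame, vni=5001)
        print(len(payload), "bytes of VXLAN payload bound for UDP port", VXLAN_UDP_PORT)

Routing between two VNIs would then happen at a gateway, not in the spine, which is another reason the spine can stay layer 2.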

I might dig into the nuts and bolts more deeply in a future post, but couldn’t get into too many specifics in our time-limited call today.

Concern two: there is no PIM support in the shipping version of Cumulus Linux. Therefore, how is IP multicast supported in EVO SDDC?

VMware’s response. Cumulus Linux supports IGMP snooping and L2 multicast. In other words, Cumulus Linux sees IGMP join requests and forwards multicast traffic within the L2 domain to the requestors. This satisfies the requirement of VMware VSAN, which is part of the EVO SDDC solution. VSAN only needs L2 multicast, not L3. As of today, VSAN nodes must exist in the same L2 domain.
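
To make that concrete: a receiver joins a multicast group, the join generates an IGMP membership report, and a snooping switch uses those reports to forward the group’s traffic only toward ports with interested receivers, all within one L2 domain. Here is a minimal standard-library Python receiver that triggers exactly that join; the group address and port are arbitrary examples, not VSAN’s actual values.

    # Minimal multicast receiver: the IP_ADD_MEMBERSHIP option makes the kernel
    # send an IGMP membership report, which an IGMP-snooping switch (such as a
    # Cumulus ToR) uses to forward this group's traffic toward our port.
    # The group and port below are arbitrary example values.
    import socket
    import struct

    GROUP = "239.1.2.3"   # administratively scoped multicast group (example)
    PORT = 5007           # example UDP port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Join the group on the default interface; this is what emits the IGMP join.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    # Receive a few datagrams. Without a PIM-built L3 tree, senders must sit in
    # the same L2 domain for their traffic to reach us.
    for _ in range(3):
        data, sender = sock.recvfrom(1500)
        print(len(data), "bytes from", sender)

What a setup like this cannot do is pull traffic from a sender on the far side of an L3 boundary; that is the PIM-shaped gap the next point covers.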

However, if a customer application requires a true L3 multicast tree based on PIM, that is not handled by the EVO SDDC network. Check with VMware, understand your application’s requirements well, and do a technical review if you want to host an L3 multicast application requiring PIM on EVO SDDC.

Concern three: there doesn’t seem to be a network reference architecture for EVO SDDC. True or false?

VMware’s response. There is no reference network architecture documentation for EVO SDDC as yet. However, VMware is actively working on this. There are two families of documents coming. One will cover typical customer networks and explain how to integrate those networks with the EVO SDDC network. The other will cover virtual networking architectures using NSX, borrowing heavily from NSX best practices that have already been established.

Separation of duties

A final consideration is that EVO SDDC heavily blurs the lines of responsibility between the networking and virtualization silos. My take is that the networking team should not have to configure the EVO SDDC network. Let the virtualization team handle that via the single pane of glass. By that, I don’t mean they will actually have to configure the network; I mean that the network configuration will be a largely transparent element of the overall EVO SDDC solution.

Considering that the solution is supposed to be completely automated across compute, storage, and network tiers, let EVO SDDC do what it’s supposed to do, network folks. Assign IP blocks. Help integrate the edge of the solution with the core of the data center. Monitor utilization and other useful statistics. But don’t expect to have to get in there and manhandle switch port configurations, tweak protocols, etc. In theory, you don’t have to and shouldn’t even want to. Part of the EVO SDDC value proposition is that it will “just work.” (I heard you snicker. Stop it right now.)
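
On the “assign IP blocks” point, that work stays comfortably above the switch CLI: carve a supernet into per-rack blocks and hand them to the tool. Here is a quick sketch with the Python standard library, where the 10.64.0.0/16 supernet and the /24-per-rack sizing are made-up examples.

    # Carving a data center supernet into per-rack blocks: the sort of input
    # the network team hands to EVO SDDC rather than per-switch configuration.
    # The supernet and per-rack prefix length are made-up example values.
    import ipaddress

    SUPERNET = ipaddress.ip_network("10.64.0.0/16")   # example allocation
    RACK_PREFIX = 24                                   # one /24 per rack (example)

    def rack_blocks(count):
        """Yield (rack_number, subnet) pairs for the first `count` racks."""
        subnets = SUPERNET.subnets(new_prefix=RACK_PREFIX)
        for rack, subnet in zip(range(1, count + 1), subnets):
            yield rack, subnet

    if __name__ == "__main__":
        for rack, subnet in rack_blocks(8):
            print("rack %d: %s" % (rack, subnet))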

Tell me in the comments if you see an impractical side to that point of view. Maybe it’s not realistic, although my gut says it is, much as virtual switches tend to be managed by virtualization folks. Provide guidance. Coordinate connectivity to the legacy network. Set expectations. But don’t get in there with a wrench to make it go. If I’m wrong on this separation-of-duties notion, tell me what I’m overlooking.