News Analysis: Big Cloud Fabric 2.5 Released


Big Switch Networks has released version 2.5 of their Big Cloud Fabric SDN offering. Read the full press release here.

What’s Big Cloud Fabric?

BCF is an SDN-based IP fabric where you manage all of the individual switches as one “big switch.” In other words, you manage the fabric as a whole, and not individual switches.

Big Switch’s BigTap, a network visibility fabric that competes with the likes of Gigamon, has made inroads with service providers, as it is aggressively priced and offers much of the same functionality as the premium-featured solutions. With BCF, Big Switch expands its SDN product line, offering a non-traditional way to design, build, and operate a data center fabric.

By “SDN” in this context, understand that I mean an application communicates network needs to a controller. The controller translates those needs into a network configuration, and provisions hardware with that configuration to meet the application’s needs. Humans interact with the application at an abstract level, and do not interact with individual network devices. Ergo, software is defining network behavior.
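As a toy sketch of that model (every name and structure here is invented for illustration, not taken from any real controller), abstract intent gets rendered into per-switch configuration something like this:

```python
# Toy model of the SDN workflow described above: a human hands abstract
# intent to an application; a controller renders it into per-switch
# configuration. All names here are invented for illustration.

def render_config(intent, switches):
    """Controller step: translate one abstract intent into per-switch config."""
    return {
        sw: {"vlans": intent["segments"], "tenant": intent["tenant"]}
        for sw in switches
    }

intent = {"tenant": "web", "segments": [10, 20]}  # what the human expresses
configs = render_config(intent, ["leaf1", "leaf2", "spine1"])
print(configs["leaf1"])  # the human never touches this output directly
```

The point of the sketch is the division of labor: the human only ever edits `intent`; the per-device detail is the controller's problem.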

Big Cloud Fabric Components

Let’s say this notion of Big Cloud Fabric intrigues you. What, exactly, is this thing? There are several pieces that make up the system.

  • Controller. BCF includes an SDN controller. If you’re familiar with traditional networking, this device is (largely) your control plane.
  • Bare metal switches. Big Switch is a champion (one of a growing number in the marketplace) of disaggregated network hardware & software, aka open networking or whitebox switching. As networking consumers, we’re used to vendor switches coming with a vendor-mandated operating system. The whitebox movement allows network hardware to be bought independently of network software. The customer buys the NOS of their choice. In the case of BCF, Big Switch will recommend specific bare-metal switches that will run their NOS, a required component of BCF.
  • Switch Light OS. Switch Light is the Big Switch operating system that runs on the bare metal switches. The OS acts largely as an agent to load instructions from the controller into the switching silicon.
  • Switch Light vSwitch. If a hypervisor switch is a part of your Clos design that you wish to control with BCF, you’ve got a vSwitch option.
  • BCF application. This is the tool that operators interact with to manage the fabric. The application runs on the controller. Big Switch doesn’t differentiate between the controller and the application in their literature. That distinction (controller vs. application) is really a reflection of how I think about SDN. It might become more obvious why I think that way if you look into platform architectures like OpenDaylight.
  • Hooks. Connect BCF to OpenStack with Neutron or ML2, for example. Alternatively, manage BCF via RESTful API.
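To make the RESTful API hook concrete, here is a hedged sketch of what driving the fabric over REST might look like. The base URL, endpoint path, and payload fields are my invented assumptions, not Big Switch’s documented API, so treat this as a shape, not a recipe:

```python
import json

# Hypothetical sketch of building a request for an SDN controller's REST API.
# The base URL, endpoint path, and payload fields are illustrative
# assumptions, not Big Switch's documented API -- consult the real BCF API
# reference before writing anything like this for production.

BASE_URL = "https://bcf-controller.example.com:8443/api/v1"

def build_tenant_request(tenant_name, segment_names):
    """Build the URL and JSON body that would create a tenant with L2 segments."""
    url = f"{BASE_URL}/tenants"
    body = json.dumps({
        "name": tenant_name,
        "segments": [{"name": s} for s in segment_names],
    })
    return url, body

# An HTTP client would POST `body` to `url`; here we just show the shapes.
url, body = build_tenant_request("web-tier", ["app-net", "db-net"])
print(url)
print(body)
```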

There are two flavors of Big Cloud Fabric. The P-Clos Edition is for an exclusively physical leaf-spine topology. The Unified P+V Clos Edition adds the vSwitch to the mix.

What do you get with BCF?

  • Multi-tenancy. I got to spend a little time with BCF a few months ago, and the notion of tenancy is very much there. If you think you don’t care about multi-tenancy, you probably do and just haven’t realized it yet. In the new world of security, where breach prevention is impossible and recovery is the emphasis, I suggest that multi-tenancy is a way forward that helps contain the damage wrought by the inevitable breach. Expect more products from vendors in 2015 that address this model in various ways. Believe me, you want to model a data center this way – you can make anything a “tenant.” Don’t think about tenants just in terms of “customers on a cloud infrastructure.”
  • Scale out architecture. I’ve written about scale out vs. scale up before. In the BCF design, start small if you like, then go sideways to build it bigger — scale out. More leaf switches. Grow as you go. Not that scale-out is unique to BCF. Leaf-spine in general does well for scale out, but it’s a useful point to know as you think about growing a data center, especially when looking at a new product. You want to be able to start small, test the waters, see how it works out. Then, if you like it, grow without having to start over again. In the BCF scenario, you would be able to do just that. Start small. Just a pod if you want. Then if the solution makes sense for the way you do IT operations, you get some more BCF leaf switches for ToR, and plumb them to the BCF spine. And then keep doing that. (Of course, spines run out of ports to uplink leaves at some point, in which case, you might be looking at a multi-tier design like this to get the host-facing port density and oversubscription ratio desired. Check with Big Switch to see exactly what the scale limitations are and what topologies are supported in BCF 2.5.)
  • Single point of management. An obvious point considering what’s already been said here, but let’s not minimize this benefit. You don’t manage a bunch of switches. You manage the fabric as a whole. And you do it with the BCF controller application. The idea is operational efficiency.
  • Fabric LAG (aka MLAG). We all know and love (hate? have a cautiously optimistic relationship with?) various multi-chassis link aggregation protocols found in Cisco, Avaya, Arista, Juniper, and other vendor products. (Because a standard implementation across vendors would certainly be a terrible idea.)  The big idea is that a single host can be dual-homed at L2 using LACP. Yep, you can do that with BCF.
  • Service chaining. Need to stick a firewall in the path of a traffic flow? How about an IDS? Load balancer? Sure. You can do that with BCF, and you don’t have to rely on physical plumbing or L3 gateways (or, $deity help us, manual policy-based routing) to make it happen. The BCF controller will handle the chaining. Last I knew, service chaining was only supported at L3, with L2 to come in the future. Someone correct me if I’m wrong here.
  • Your choice of configuration methods. Use CLI, GUI, or REST API. I don’t know how wonderful the API is or is not – haven’t used it. But I can say that REST APIs in general are easy to consume. The question is more whether or not the methods offered let you do what you want without having to jump through 20 sets of GETs and POSTs and subsequent data munging. Anyone with experience using Big Switch’s APIs feel free to comment and/or share some code examples.
  • Resilient fabric operations. If the controller is gone for whatever reason, the fabric will still function. No, you can’t do much in the way of configuration, but at least the fabric is forwarding traffic while the controller issue gets sorted out.
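The scale-out arithmetic behind the leaf-spine point above can be sketched in a few lines. All the port counts and speeds below are example assumptions for illustration, not BCF limits; check with Big Switch for the real numbers:

```python
# Back-of-the-envelope sizing for a two-tier leaf-spine fabric, illustrating
# the scale-out and oversubscription trade-offs discussed above. All port
# counts and speeds are example assumptions, not BCF limits.

def leaf_spine_sizing(host_ports_per_leaf, host_gbps,
                      uplinks_per_leaf, uplink_gbps, spine_ports):
    """Return (total host ports, oversubscription ratio) for the fabric."""
    max_leaves = spine_ports  # each leaf consumes one port on every spine
    total_host_ports = host_ports_per_leaf * max_leaves
    oversub = (host_ports_per_leaf * host_gbps) / (uplinks_per_leaf * uplink_gbps)
    return total_host_ports, oversub

# Example: 48 x 10G host ports and 6 x 40G uplinks per leaf, 32-port spines.
hosts, ratio = leaf_spine_sizing(48, 10, 6, 40, 32)
print(f"{hosts} host ports at {ratio:.1f}:1 oversubscription")
```

With those example numbers you top out at 1536 host-facing ports at 2:1 oversubscription; wanting more hosts at that ratio is exactly when the multi-tier designs mentioned above come into play.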

What’s new in Big Cloud Fabric 2.5?

  • VMware vSphere support. BCF will talk to vCenter, figure out what networks are required, and create them automatically. In addition, BCF will integrate data learned from vCenter into the controller interface. In other words, go to the BCF application, see VMware stuff. Other hypervisor support includes Hyper-V, KVM, and XenServer.
  • SwitchLight OS runs on Dell Open Network switches. Currently, that means the S6000-ON, and eventually the S4000-ON as well. Note that Dell is a Big Switch partner, and would be delighted to sell you Big Cloud Fabric. Big Switch tells me that a major effort went into training Dell staff to deliver the Big Switch solution. So bug your Dell rep – either they should know about Big Switch technology, or should be able to connect you to someone within the Dell organization who does.
  • Fabric analytics. Big Switch says “best in class fabric analytics,” one of those ambiguous marketing claims that doesn’t really mean much. But let’s assume that at the very least, it’s usable and provides useful information. There’s a bigger point, though. Since the Big Switch BCF controller knows an awful lot about what’s going on in the fabric, a solid analytics engine seems like an obvious use of that data. And behold, that’s what’s happened. Expect more of this from SDN vendors. Think Purview from Extreme.
  • More clouds. OpenStack is supported, as I mentioned above. As of 2.5, add CloudStack and Citrix CloudPlatform to the list.

What’s this all mean? Why do you care?

I want networking to become easy to consume over the next few years. I am sick to death of building data centers by hand and managing devices one at a time. I’m weary of overpriced management systems that rely on SNMP and screen scraping to do CLI-oriented configuration as a way to save me time. Okay, that’s not a new complaint from me, and some have accused me of focusing on the wrong problem. And some of you have written your own automation systems because you got tired of waiting for the industry to figure it out. Fair enough.

But let’s think about what BCF and similar (yet quite different) tools in this space like Cisco ACI and VMware NSX are bringing to the table. Centralized management. Build a policy once, push to many devices, and abstract the underlying infrastructure.

Yes, these products accomplish these goals with different technical focuses, capabilities, and approaches, to be sure. My point is that these are all, on some level, illustrative of a very different way to think about networking. To us, the network operators of the world, falls the responsibility to wrap our brains around these technologies and start offering vendors feedback on them. Are they good? Bad? Do they make our lives easier? Is the automation working the way we need? Can we even articulate our needs more ably than screaming into our blogs and the Twitterverse?

I think we can. I know Big Switch is very engaged and seeking user feedback. So give it to them. Sign up for Big Switch Labs. Yeah, it’s gated – not a free-for-all demo. But it’s the kind of thing that lets you get your brain wrapped around new networking without having to deal with the hassle of being shipped a box, etc.

I’d be delighted to get your feedback on BCF here as well. Leave a comment below. I know a few of the folks at BSN, and I believe I can get them to engage you here if you have questions/concerns/comments.

About the author

Ethan Banks

Most people know me because I write & podcast about IT on the Packet Pushers network. I also co-authored "Computer Networks Problems & Solutions" with Russ White.

Find out more on my about page.



