
ONUG Spring 2015 Live Blog – Day 2


All times are EST/NYC.

May 14, 2015

9:27am – Hello, and good morning.

This morning I’m at a Tech Field Day presentation with NEC that runs for about an hour. I’ll head over to the main ONUG hall a bit later to catch a morning session if I can, but certainly some afternoon sessions.

9:32am – NEC ONUG Test Results Presentation to Tech Field Day

NEC has received the NSV validation from ONUG. Also announcing pay-as-you-grow pricing with the Programmable Flow Controller (PFC) Starter Pack. Starts at $3K. Grow with 5-pack switch licenses.

PFC 6.1 release includes…

  • Support for Microsoft NVGRE
  • Integration with VMware vRealize Cloud Management Platform
  • OpenStack Juno support
  • More goodness in the GUI (new visualizations and enhancements)

ONUG as a group cares about network service virtualization. (If you read this blog, think NFV.) Why do they care? They want to drop the cost of L4-7 services, easily chain services together, and reduce the time it takes to configure fancy network appliances. ONUG has a series of working groups, one of which is the NSV working group — the NEC orchestration solution was tested by ONUG in partnership with Ixia.

The testing that was done by ONUG/Ixia was to demonstrate that specific traffic flows could be chained through various services across the network topology. NEC controls the network fabric using their network virtualization application (i.e. their Virtual Tenant Network application that has been an offering in ProgrammableFlow for at least a couple of years now). The big idea is that an operator interacts with an application on the controller. The controller handles provisioning of the network fabric as a whole, meaning that operators don’t have to program individual switches.

NEC offers an API so that people who want to program custom applications or services into their network environment can do so. A couple of examples were cited.

NEC defines key network service virtualization features they can deliver as…

  • Network Virtualization (network abstraction, location free, reliability)
  • vBridge and vRouter
  • Flow Filter (in specific locations, or applied globally)
  • API (REST, allows for per-flow service management)
  • Visibility (topology discovery, end-to-end flow visibility)
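
The REST API bullet above lends itself to a quick sketch of per-flow service management. Note that the controller address, endpoint path, and field names below are my own placeholder assumptions for illustration, not NEC's documented ProgrammableFlow API:

```python
import json

# Hypothetical controller address and endpoint -- placeholder values,
# NOT the actual ProgrammableFlow REST API.
CONTROLLER = "https://pfc.example.net:8443"
ENDPOINT = CONTROLLER + "/vtn/flowfilters"

def build_flow_filter(src_ip, dst_ip, action, priority=100):
    """Assemble a JSON body for a hypothetical per-flow filter rule."""
    return {
        "match": {"src-ip": src_ip, "dst-ip": dst_ip},
        "action": action,        # e.g. "pass", "drop", "redirect"
        "priority": priority,
    }

rule = build_flow_filter("10.0.1.10", "10.0.2.20", "redirect")
body = json.dumps(rule)
# In practice this body would be POSTed to ENDPOINT with an HTTP client;
# the request itself is omitted so the sketch stays self-contained.
print(body)
```

The point is the shape of the interaction: an operator (or application) describes a flow and an action, and the controller pushes the matching entries into the fabric.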

Network Service Virtualization Test Results

You can download the tests and a related whitepaper on NSV from here.

NEC passed 8 out of 10 of the tests defined by the ONUG NSV working group. Again, NEC was testing their orchestration application here, using ProgrammableFlow. The focus is on “on demand orchestration of flows for service insertion,” and on being able to bring 3rd party appliances into the service chain. NEC has a lot of different partners whose systems they work with, including Radware, Palo Alto Networks, F5 and Riverbed.
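
To make the service-insertion idea concrete, here is a minimal sketch: classify a flow, then steer it through an ordered chain of third-party appliances. The traffic classes, port-based classifier, and chains below are my own illustrations, not taken from the ONUG test plan:

```python
# Map a traffic class to an ordered chain of appliance hops.
# Classes and chains are illustrative; in the tested setup, chains
# would be provisioned through the controller's orchestration app.
CHAINS = {
    "web":     ["firewall", "load-balancer"],
    "branch":  ["firewall", "wan-optimizer"],
    "default": ["firewall"],
}

def classify(dst_port):
    """Toy classifier: pick a traffic class from the destination port."""
    if dst_port in (80, 443):
        return "web"
    if dst_port == 8080:
        return "branch"
    return "default"

def service_chain(dst_port):
    """Return the ordered list of services a flow must traverse."""
    return CHAINS[classify(dst_port)]

print(service_chain(443))  # web traffic: firewall, then load balancer
```

The testing described above essentially verified this behavior end to end: traffic matching a given classification actually transited the expected appliances, in order.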

The NEC presentation has shifted from Don Clark to Pierre Lynch, who is going to get into the testing methodology for NSV at ONUG. In other words, was the testing a meaningful set of requirements? How was the testing done, etc.?

Pierre points out that the testing was done with an Ixia box running IxLoad (a “stateful subscriber and server emulation tool for stateful L4-7 testing”), sending traffic flows into the IP network. Each requirement has one test case, for a total of 10 tests. Each test had a structure describing exactly what the point of the test was and what the expected results would be — not the specific results, but in general what sorts of answers would need to be in the result set for it to be useful and a fair comparison among participants.

Most of the traffic mix was HTTP between 100 clients and servers. Thus, this set of tests was meant to prove functionality, not scale.

NSV Testing Demo

Presentation handed off to Jenny Oshima. Product demo underway. I like this GUI – it shows the physical topology in a clean way, but without too much data on the top side. You can drill in for more info about how links are connected, where hosts reside in the fabric, etc. There will be a Tech Field Day video of this session in the future that I will try to remember to embed.

I find it notable that in the testbed, not all switches in the fabric are NEC. They also list a NoviFlow switch as well as a Dell S4810. (My two cents, this is yet another harbinger of OpenFlow as the great equalizer. Assuming broader support for OpenFlow across the industry over time, programming switches is not tied to a vertically integrated stack of controller, NOS, and switch hardware. A controller that speaks OpenFlow southbound can populate flow entries into any switch that supports OpenFlow in the expected way. We’re a long way from OpenFlow ubiquity in the industry, but interoperability and cross-vendor partnerships tied to OpenFlow keep showing up. Recall that NEC partnered with Alcatel Lucent Enterprise recently.)

NEC demos a link failure in an ECMP leaf-spine fabric. The big idea here is that the controller will program certain flows to go across one link, and certain flows across another, but using a “port group” concept. The controller will also pre-program into the switch hardware an alternate “fast failover” path within a port group. That means that if a link goes down in the ECMP link fabric, there’s a microsecond failover to the alternate path. The controller does not have to react to the failure and then program a new forwarding path. The alternate path is already programmed in the switch hardware, and the switch can react immediately without punting to the controller. That pre-programming is done without doubling TCAM entry requirements, as the entries for forwarding are assigned to consolidated groups of flows. In that architecture, silicon resource impact is minimal.
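
This maps closely to OpenFlow’s fast-failover group type, where each bucket in a group watches a port and the switch forwards out the first bucket whose watched port is live. Here is a plain-Python model of that selection logic — a sketch of the concept, not the NEC implementation:

```python
# Model an OpenFlow-style fast-failover group: an ordered list of
# buckets, each watching a port. The first live bucket wins, with no
# round-trip to the controller when a link fails.
class FastFailoverGroup:
    def __init__(self, buckets):
        # buckets: ordered (watch_port, out_port) pairs;
        # earlier buckets are preferred paths
        self.buckets = buckets

    def select(self, live_ports):
        """Return the out_port of the first bucket whose watch port is up."""
        for watch_port, out_port in self.buckets:
            if watch_port in live_ports:
                return out_port
        return None  # all paths down

group = FastFailoverGroup([(1, 1), (2, 2)])  # primary port 1, backup port 2
print(group.select({1, 2}))  # both links up -> forward via primary
print(group.select({2}))     # port 1 failed -> instant local failover
```

Because many flows share one group entry, the backup path costs one extra bucket rather than a duplicate flow entry per flow, which is why the TCAM impact stays minimal.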

1:54pm – Break before 2 afternoon sessions

I’ll be live blogging the final two sessions of ONUG today.

  1. 2:00pm: Open Networking and Storage Investment Implications Panel
  2. 2:45pm: Town Hall Meeting — Will the DevOps Model Deliver in the Enterprise?

2:10pm – Open Networking and Storage Investment Implications Panel

Panel takes the stage. Promise is that the panelists are “not shy or short on opinion.” Introductions made — moderator from The Juda Group, panelists from Battery Ventures, Cowen & Company, and AO Asset Management.

Question 1: There’s a move towards open networking that’s happening at its own pace. Where do you think we are in the open networking life cycle, what’s the impact on stocks, etc.?

Panelist opinions:

  1. Most investors believe that networking died last year, and that’s the filter through which the investment community views traditional networking vendors.
  2. VMware bought Nicira — that $1.2B purchase changed the perspective of the investment community. Everything changed. Cisco is not out of the game yet, stock is rebounding from a year ago when the SDN hype cycle was climbing. There’s a phase where everyone thinks the world is changing, but then things will settle back down to a more realistic pattern. Networking incumbents aren’t going to be doing that badly necessarily. L4-7 vendors are going to be impacted over time, and aren’t going to do well selling appliances anymore.
  3. Companies are risk averse, and new technology adoption is risky. Therefore, it will be a long(er) adoption cycle. Also, recognize that these markets are huge — the numbers don’t shift over night. These are early days.

Question 2: When looking at history (server virtualization where VMware won and the server guys lost), how does that inform what we’re seeing in networking today?

  1. End users are being challenged to reinvent themselves to meet the demand of the developer communities they support. Startups are gaining traction in this area. These customers are early adopters of technology because they have to be, and startups have some inroads there. “Leading the charge.” There are folks that are doing greenfield infrastructures, where the startups have an easier way in, not fighting the incumbent engineer with a CCIE number tattooed on his forehead.
  2. There are points of penetration where smaller companies can enter the market, and investors are looking for these companies that could experience rapid growth.
  3. There used to be just VMware in the virtualization space, but now there’s many companies offering many different solutions. Same thing is happening in networking. There will be 10+ years before we’ve found meaning in all of the changes and know what compute looks like.
  4. People are looking for better ways to manage their infrastructure. It’s not necessarily about better utilization. It’s about easier service provisioning.

Question 3: How do SD-WAN and other SDN technologies impact investments in existing L4-7 companies?

  1. We haven’t seen the needle moving yet to impact the L4-7 companies. Changes are incremental. F5 is impacted by NFV more than SDN. Competitors have not gained enough traction to change the paradigm and swing the stock.
  2. Very few investors are even aware of SD-WAN, and it’s not impacting their decision processes yet. SD-WAN is distinct from the SD-DC business case in that SD-WAN offers a very strong ROI very quickly.

Question 4: Where are we with M&A, and every investor suddenly needing an SDN company in their portfolio?

  1. It’s a good time to be a startup that’s growing in the SDN space. There’s a steady drumbeat of new companies trying to get going.
  2. M&A is rough in the telecom space. 2-3 companies succeed & go public. 2-3 companies get acquired. And the rest likely fail. That’s history. We’ll see in this market how this looks, because there’s a bit of re-segmenting going on.

Question 5: What’s a good market for a new investor to get involved in?

  1. Easy money is forcing its way into the stock market, and the market is being driven higher. The trick is to find those companies that are going to grow and be consistently profitable. There is skepticism on the part of investors, though. There will always be an appetite for investments that fit the growth/profit criteria.
  2. Once every 3-5 years, we get another round of consolidations. Massive consolidations. But also 3-5 years cycle of IPOs. Sometime in the next 12-36 months, there is likely to be another round of IPOs. The next round could be far better than the last round.

Question 6: What is the one thing you think global investors don’t understand about open networking and its impact on the global market? And then M&A (or other) predictions?

  1. Investors have no idea how much complexity is involved in making open networking happen. It’s not black and white. Most investors don’t grasp just how challenging a shift this is, or how long it’s going to take. Prediction: SDN & NFV & cloud won’t be that much more prominent in real life. Not too much more substantial than today. Happening slowly.
  2. Prediction: If you believe the revenue ramp-up projections, it’s possible we could see an IPO in the next 12 months, although that’s a tight timeframe. But there’s no wisdom in going public just to go public. Very stressful, lots of scrutiny. The expectation is that there probably aren’t going to be any IPOs. Also, VMware/Nicira isn’t going to make much progress with NSX beyond the security/microsegmentation story.
  3. Misconception is that investors think that SDN and automation are just for webscale, which is not true. In reality, smaller companies that can afford to make the operating changes can take advantage and tend to be the adopters.

2:55pm – Town Hall Debate: Will the DevOps Model Deliver in the Enterprise?

Folks from Facebook, Cisco, Ansible, Nuage Networks, and vArmour on the platform.

Question 1: We’ve been operating infrastructure and networks just fine for 20 years. What’s changed that we need devops now?

  1. Argument that many have been doing devops for a long time. What’s changed is that we have a name for it, we have APIs, and we have more tools. The advantage is that we can apply distributed systems principles to networking. Reduction in human error is key. Operational cost is also nice, but a tertiary benefit.

Question 2: Is devops more about culture/attitude than tools?

  1. Scale coupled with required speed of execution means software. No other way to get there. You MUST run software to manage infrastructure effectively at scale.
  2. APIs to twiddle something in the infrastructure is viable – that can be done. But then it opens up a whole other set of potential problems. Devops must drastically simplify the knowledge set required to successfully deploy an application. There’s lots of infrastructure aspects that certain people should not have to know in order to successfully deploy an app.
  3. Uncontrollable speed could cause issues in the enterprise without proper controls. There must still be control in a devops process, or there is risk to the enterprise. Policy controls must be put in place to allow devops to become real while keeping the speed under control. Facebook is a different world than the typical enterprise.
  4. Contradiction to the “uncontrollable speed” point above — having the ability doesn’t mean that you’re running as fast as you can, just like a car with a 140mph speedometer isn’t driven that fast by the operator. The issue is how to manage the infrastructure faster with fewer people, NOT how to just slam in changes as quickly as possible. Execution of a change is distinct from executing a bunch of changes.
  5. There is a boundary within which applications can do what they want. When they exceed that boundary (“the box”), they are constrained. Of course, developers don’t want constraints. But really, you want to provide a certain amount of restraint, without going overboard.
  6. Are we jumping ahead when talking about compliance and control? Do we even have the ability to take our hands off of the keyboard for the network control right now?

Question 3: Where do multidisciplinary engineers come from and what skills do they have?

  1. We aren’t training them, and even when we do, someone else seems to poach them. Young kids must be trained to think in a devops way, not in specific disciplines like network engineering.
  2. You still need specialization, though. Otherwise, everyone does everything. You don’t want to see Mike Dvorkin writing device drivers (so he says).
  3. Devops isn’t a job title, it’s not something one person or thing does, it’s more of a way of thinking.
  4. Teams cannot remain in their silos. Engineers must have an appreciation all the way up the stack. Abstracting complexity is important, but reducing complexity is just as important.
  5. Advocacy for a “platform team” as opposed to silos. Think about the IT product holistically as a platform that is consumed.
  6. The majority of networking issues are organizational more than technological. A team culture must be oriented around solving a problem, rather than owning a domain. (My two cents, YES. I’m on the table cheering in my mind.) Often, engineers try to solve complex problem from the perspective of their domain and only their domain. That’s a bad approach, can lead to terrible solutions. “Everything has unintended consequences.”
  7. It all comes back around to access to data. This gives you the ability to solve problems across domains.

Question 4: Is devops an all-or-nothing proposition, or can you deal with it in layers?

  1. Fundamentally, it is all or nothing. You can try to play or experiment with higher layers if, say, your routed infrastructure doesn’t change much.
  2. You have to start somewhere. Think of the low hanging fruit, the simple provisioning process. Or maybe look at an especially painful process that could be automated. But it tends to be easier in the simpler environment, because you need to understand your environment very, very well in order to automate it.
  3. Devops is not a tool or strategy. It’s a culture. It’s about how you think. Pick a particular area, targeting something with low risk — something that will allow you to fail. “Success is a lousy teacher.”

Question 5: How do you maintain discipline (engineers exceeding their boundaries / “the box”) if silos are broken down?

  1. Abstract the infrastructure away to such a point that the individual elements don’t matter. Put another way, that’s the wrong question.
  2. Limit the number of services that can be configured. Stay focused. Define the contracts that your application can consume. Make sure “the box” isn’t too variable.

Question 6: Do we have a good way to properly abstract all of what we need to abstract?

  1. It’s also about consumption models. It’s not enough to abstract infrastructure; it must be abstracted in a way that it can be consumed. We’re thinking in terms of elemental things like VLANs, but that’s not desirable long-term. Too granular; it just happens to be the state of the art.
  2. We should be declaring what the outcome should be, not specifically defining the path by which the outcome is arrived at. (My two cents: discussion here is somewhat about promise theory, although that hasn’t been explicitly mentioned.)
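
That declarative idea — state the outcome, let the system compute the path — can be sketched as a desired-state reconcile loop. This is a toy illustration of the principle using VLANs as the example resource, not any particular tool:

```python
def reconcile(desired, actual):
    """Compute the changes needed to move actual state to desired state.

    Returns (to_create, to_delete) -- the operator declares only
    'desired'; the ordering of operations is the system's problem.
    """
    to_create = sorted(desired - actual)
    to_delete = sorted(actual - desired)
    return to_create, to_delete

desired_vlans = {10, 20, 30}   # the declared outcome
actual_vlans = {10, 40}        # what the network currently has
create, delete = reconcile(desired_vlans, actual_vlans)
print("create:", create)
print("delete:", delete)
```

Contrast this with the imperative style, where the operator would type the create and delete commands on each device in the right order by hand.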

3:48pm – Closing

That’s the end of my ONUG live blogging! I know this wasn’t my typical writing style, but I hope you found some thought-provoking information along the way.

Ethan Banks writes & podcasts about IT, new media, and personal tech.
about | subscribe | @ecbanks