72% of Networking Performance Statistics Are Misleading


One element of the networking industry’s marketing machine is the citing of performance statistics. This { box | software package | interface } can perform this many operations this quickly. Statistics are nice for technically minded people. They make us feel like we’re making an informed decision about a product. “Well, gosh. This product is faster than that one, see? Clearly, it’s superior.”

Like my tongue-in-cheek title, performance statistics are often misleading or, at best, meaningless without context. As a savvy consumer of any networking product, you should look at performance statistics as little more than a rough indicator of how a { box | software package | interface } performed under a specific test circumstance. Hint: the tests are usually rigged. Specifically, networking device performance tests are set up so that the test data thrown at the device shows the product in the best possible light. That test data is not going to look much like your network, which is why the statistics are misleading.

Here’s what you need to ask yourself when it comes to reported performance statistics.

  1. Who did the testing, and how were they paid? If the vendor did their own testing…well…no doubt the tests have some root in reality, but it’s unlikely they’ll display the ugly underneath. If the testing was independent, but commissioned by the vendor, well…then you need to dig into the specific testing methodology. Was it a test designed to favor a predetermined outcome? Hint: yes, most likely. If the testing was independent and funded by a consortium of vendors, then I think it’s more likely to be useful data.
  2. What sort of testing was done? For example, was the traffic mixed? Of varying packet sizes? At what scale? Depending on the hardware and software being tested, different traffic mixes and rates can produce very different results; see the rough arithmetic sketched after this list. In the emerging world of networking software and SDN controllers, the concern shifts to software performance and scalability, measured more in operations per second than in raw throughput. Again, different sorts of tests are likely to produce different sorts of numbers.
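
To make the packet-size point concrete, here is a rough back-of-the-envelope sketch in Python. It assumes a hypothetical 10 Gbps Ethernet link and standard per-frame overhead (preamble, start-of-frame delimiter, inter-frame gap); the link speed and frame sizes are illustrative numbers, not figures from any vendor test.

    # Line-rate arithmetic for a hypothetical 10 Gbps Ethernet link.
    # Frame sizes include the Ethernet header and FCS; the 20 extra bytes per
    # frame are the 7-byte preamble + 1-byte SFD + 12-byte inter-frame gap.

    LINK_BPS = 10e9        # advertised link speed, in bits per second
    OVERHEAD_BYTES = 20    # preamble + SFD + inter-frame gap

    def max_pps(frame_bytes: int) -> float:
        """Theoretical maximum frames per second at line rate for one frame size."""
        wire_bits = (frame_bytes + OVERHEAD_BYTES) * 8
        return LINK_BPS / wire_bits

    for size in (64, 512, 1518):
        print(f"{size:>5}-byte frames: {max_pps(size) / 1e6:6.2f} Mpps")

The same "10 Gbps" claim works out to roughly 14.88 million packets per second with minimum-size frames but only about 0.81 million with 1518-byte frames, which is why a test run entirely with big frames can make a forwarding engine look far more capable than it will be under a realistic traffic mix.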

You need to do your own testing before committing to any product. Marketing performance statistics are, at best, a rough indication of how well the { switch | controller | firewall } will perform in your specific environment. For those of you with limited scale requirements, this might not be a big deal. But for those operating large environments for whom maximum performance is key, take the claims with a grain of salt. Do your own testing with your own real data.

Oh — and then do the rest of us a favor: publicize your testing methods and results. The FUD of marketing hyperbole is a tedious weight hanging around the neck of this industry. We could do with more end user testing data.