
Thinking Through A Mellanox Dual-Tier Fixed Switch 10G + 40G Fabric Design

When preparing for “The Ethernet Switching Landscape” presentation at Interop, I did quite a bit of reading through vendor marketing literature and documentation. One proposed fabric design by Mellanox stuck in my brain because I had a hard time mapping the numbers in their whitepaper to what the design would actually look like. Here’s the slide I built around a section in the Mellanox whitepaper.

The point of this design was to demonstrate how to scale out a fabric supporting a large number of 10GbE access ports using fixed configuration Mellanox switches.
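
As a rough way to sanity-check numbers like the ones in that whitepaper, a little back-of-the-envelope math goes a long way. The sketch below works through a generic two-tier leaf/spine build out of fixed switches; the port counts are illustrative assumptions, not Mellanox’s actual figures.

```python
# Back-of-the-envelope math for a two-tier leaf/spine fabric built from
# fixed-configuration switches. Port counts are illustrative assumptions,
# not figures from the Mellanox whitepaper.

LEAF_10G_PORTS = 48      # 10GbE access ports per leaf (assumed)
LEAF_40G_UPLINKS = 12    # 40GbE uplinks per leaf (assumed)
SPINE_40G_PORTS = 36     # 40GbE ports per spine switch (assumed)

# With every leaf uplink landing on a different spine, the spine count
# equals the number of uplinks per leaf, and each spine can terminate
# one uplink from every leaf.
spines = LEAF_40G_UPLINKS
leaves = SPINE_40G_PORTS          # one 40G port per leaf on each spine

access_ports = leaves * LEAF_10G_PORTS
access_bw_gbps = LEAF_10G_PORTS * 10
uplink_bw_gbps = LEAF_40G_UPLINKS * 40
oversubscription = access_bw_gbps / uplink_bw_gbps

print(f"{leaves} leaves x {spines} spines")
print(f"10GbE access ports: {access_ports}")
print(f"Oversubscription per leaf: {oversubscription:.1f}:1")
```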

Read more

The Ethernet Switching Landscape – Part 05 – Equal Cost Multipath (ECMP)

This is one post in a multi-part series on the Ethernet switching landscape that I wrote to support a 2-hour presentation I made at Interop Las Vegas 2014. Part 1 of this written series appeared on NetworkComputing.com. Search for the rest of this series.

In data center design, the ability to use all available links to forward traffic is an important consideration. Not only is using all link capacity wise from a network design standpoint,
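
To see why “use every link” matters, it helps to picture how ECMP typically chooses among those links: the switch hashes fields of the flow so that all packets of one flow stay on one path, while different flows spread across the available paths. The sketch below is a generic illustration under that assumption; real ASICs use their own hardware hash functions.

```python
import hashlib

def ecmp_path(src_ip, dst_ip, proto, src_port, dst_port, num_paths):
    """Pick one of num_paths equal-cost next hops by hashing the 5-tuple.

    Keeping the hash flow-based means every packet of a flow takes the
    same path, which avoids reordering. Real switch ASICs use their own
    hardware hash functions; this is only an illustration.
    """
    key = f"{src_ip},{dst_ip},{proto},{src_port},{dst_port}".encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# Two flows between the same hosts can land on different uplinks,
# but each individual flow always hashes to the same one.
print(ecmp_path("10.0.1.10", "10.0.2.20", "tcp", 51512, 443, 4))
print(ecmp_path("10.0.1.10", "10.0.2.20", "tcp", 51513, 443, 4))
```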

Read more

The Ethernet Switching Landscape – Part 03 – Different speeds for different needs.

This is one post in a multi-part series on the Ethernet switching landscape I’m writing in preparation for a 2-hour presentation at Interop Las Vegas 2014 (hour 1 & hour 2). Part 1 of this written series appeared on NetworkComputing.com. Search for the rest of this series.

When considering an Ethernet switch, the sort of network you’re trying to build factors into the decision.

Read more

Evaluating An IT Purchase: Take Your Eyes Off The Shiny

The sales cycle. Go ahead and groan. No matter where we are in the IT ecosystem, all of us have to deal with purchasing technology – from the CIO on down to the help desk technician working inbound triage. Sales cycles are *awful* things, filled with presentations, evaluations, pricing exercises, parts lists, bundling deals, end-of-quarter incentives, trade-ins, lead times, and the odd lunch or two. The sad part? Dealing with vendor sales folks is all too often adversarial in nature –

Read more

The Ethernet Switching Landscape – Part 02 – Finding meaning in latency measurements.

This is one post in a multi-part series on the Ethernet switching landscape I’m writing in preparation for a 2-hour presentation at Interop Las Vegas 2014 (hour 1 & hour 2). Part 1 of this written series appeared on NetworkComputing.com. Search for the rest of this series.

One often-quoted statistic in Ethernet switch specifications is latency. Latency figures are usually cited in microseconds or nanoseconds.
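
To put those units in perspective, it can be useful to compare a quoted switch latency against the simple serialization delay of a frame at a given link speed. The figures below are illustrative assumptions, not taken from any vendor’s datasheet.

```python
# Compare an assumed cut-through switch latency against the time it takes
# just to serialize a frame onto the wire. Figures are illustrative, not
# from any particular datasheet.

frame_bits = 1500 * 8            # a full-size 1500-byte frame
serialization_1g_us = frame_bits / 1_000_000_000 * 1e6    # at 1GbE
serialization_10g_us = frame_bits / 10_000_000_000 * 1e6  # at 10GbE
assumed_switch_latency_us = 0.5  # e.g. a sub-microsecond cut-through switch

print(f"Serialization of 1500B at 1GbE:  {serialization_1g_us:.2f} us")
print(f"Serialization of 1500B at 10GbE: {serialization_10g_us:.2f} us")
print(f"Assumed switch latency:          {assumed_switch_latency_us:.2f} us")
```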

Read more

Do You Really Need Gigabit To The Desktop? Maybe. Maybe Not.

The topic came up recently that a segment of user workstations on a network I manage was serviced only by 100Mbps ports, not gigabit. None of us had much of a problem with that, as 100Mbps is an awful lot of bandwidth for the typical desktop user. If 100Mbps seems parsimonious, consider the following.

1. I work from home on a broadband 35Mbps/2Mbps connection.

  • Almost every service I use is outside of my house.
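
For a bit of perspective on those numbers, here is a quick transfer-time comparison at the speeds in question; the file size is an arbitrary example, and real-world throughput will land below line rate.

```python
# How long does a transfer take at various speeds? The 500MB file size is
# an arbitrary example; real throughput will be lower than line rate due
# to protocol overhead.

file_bits = 500 * 1024 * 1024 * 8   # 500 MB file

for label, rate_bps in [("35Mbps broadband", 35e6),
                        ("100Mbps access port", 100e6),
                        ("1Gbps access port", 1e9)]:
    seconds = file_bits / rate_bps
    print(f"{label:20s} {seconds:6.1f} s")
```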

Read more

Firewall Administration for Sysadmins in Four Parts

I wrote a long blog post for Network Computing that ended up published in four parts. The topic was helping sysadmins understand what firewall appliances do, and therefore how best to ask for firewall assistance from those who manage them.
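
Since a firewall fundamentally matches traffic on attributes like source, destination, protocol, and port, a useful request to the firewall team comes down to supplying exactly those details. The sketch below shows what that minimal set of information might look like; the field names are my own illustration, not something from the Network Computing posts.

```python
# A hypothetical, minimal firewall change request. The point is that the
# firewall admin needs the full picture of the flow, not "please open the
# firewall for my app." Field names are illustrative.

firewall_request = {
    "description": "Web tier to database tier, new HR application",
    "source":      "10.1.10.0/24",   # where the traffic originates
    "destination": "10.2.20.15",     # where it is going
    "protocol":    "tcp",
    "dest_port":   1433,             # the service being reached
    "direction":   "inside -> dmz",  # which zones/interfaces are involved
    "duration":    "permanent",      # or a removal date for temporary rules
}

for field, value in firewall_request.items():
    print(f"{field:12s}: {value}")
```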

Firewall Administration For Sysadmins: A Primer

Firewall configurations can be astonishingly complex. Firewall administrators deserve love and respect, as making the firewall not only pass traffic, but also pass it securely,

Read more

Enterprise QoS Part 09 – A consistent QoS strategy: end-to-end packet walk – congested vs. non-congested.

If you’ve made it this far into the series, I have one simple point about QoS policy effectiveness that I want to bring home in this post before going through a couple of packet walks. The point is this. If an interface isn’t congested, your QoS policy dealing with congestion isn’t impacting traffic. Of course, rate limiters & marking policies will be effective whether your interface is congested or not, as the point of them is to throttle a traffic class to a specific transmission rate or mark traffic with a specific value.
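
A toy example makes the point concrete: a priority scheduler can only reorder packets that are actually waiting in a queue, so with no congestion there is nothing for it to do. The sketch below is a deliberately simplified illustration, not any particular platform’s queueing behavior.

```python
import heapq

# Toy illustration: a strict-priority scheduler only changes the order of
# packets that are actually queued. If the egress interface can drain
# packets as fast as they arrive, nothing waits, and priority never matters.

def drain(queue):
    """Serve queued packets in strict priority order (lower value first)."""
    order = []
    while queue:
        _, name = heapq.heappop(queue)
        order.append(name)
    return order

# Congested case: three packets are waiting at once, so the voice packet
# (priority 0) is transmitted ahead of bulk data (priority 2).
congested = []
heapq.heappush(congested, (2, "bulk-1"))
heapq.heappush(congested, (0, "voice-1"))
heapq.heappush(congested, (2, "bulk-2"))
print("congested egress  :", drain(congested))

# Uncongested case: packets arrive one at a time and are sent immediately,
# so they go out in arrival order. The queueing policy has nothing to do.
uncongested_order = []
for prio, name in [(2, "bulk-1"), (0, "voice-1"), (2, "bulk-2")]:
    uncongested_order += drain([(prio, name)])
print("uncongested egress:", uncongested_order)
```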

Read more

Enterprise QoS Part 08 – A consistent QoS strategy: shaping to match downstream bandwidth while still prioritizing.

When dealing with the WAN, a common problem is that the actual available bandwidth of a circuit might differ from the bandwidth of the physical circuit handoff. For example, a carrier might provide an enterprise with a gigabit Ethernet handoff, when in fact the connection is being throttled to 100Mbps downstream. A similar sort of problem appears when head-end bandwidth differs from remote site bandwidth. For example, a headquarters site might have a full DS3 connection,
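
Here is the shaping arithmetic worked through with assumed numbers: shape the gigabit handoff to the carrier’s real 100Mbps rate so that queueing decisions happen on the enterprise router rather than in the carrier’s policer. The 10ms interval below is an assumption for illustration, not a recommendation.

```python
# Worked example of generic traffic-shaping math under assumed numbers.
# The carrier hands off gigabit Ethernet but polices to 100Mbps, so we
# shape to 100Mbps and let our own QoS policy decide what waits.

cir_bps = 100_000_000          # shaped rate: 100 Mbps (the carrier's real rate)
tc_seconds = 0.01              # shaping interval, 10 ms (assumed for illustration)

bc_bits = int(cir_bps * tc_seconds)   # bits sent per interval
intervals_per_second = 1 / tc_seconds

print(f"CIR: {cir_bps / 1e6:.0f} Mbps")
print(f"Tc : {tc_seconds * 1000:.0f} ms ({intervals_per_second:.0f} intervals/sec)")
print(f"Bc : {bc_bits} bits ({bc_bits // 8} bytes) per interval")
```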

Read more

Enterprise QoS Part 07 – A consistent QoS strategy: queueing collaboration applications at the WAN edge.

As traffic flows across an enterprise’s network, there often comes a point where some part of the infrastructure is not owned by the enterprise. For example, enterprises with offices spread across several different cities usually rely on a telecommunications provider to connect the offices together. The telecom provider will layer the enterprise’s traffic on top of their own infrastructure, commonly in the form of an L3VPN. To the enterprise, the connection handed off to them by the provider is a private one providing access to their remote offices.

Read more

Enterprise QoS Part 06 – Using Cisco AutoQoS as a QoS baseline.

Many large enterprises across the world use Cisco switching gear. As mentioned in previous parts of this series, implementing QoS across disparate devices, even within the Cisco ecosystem, can be frustrating as the syntax varies widely. In an attempt to reduce network operator frustration (as well as human error), Cisco introduced AutoQoS as a means to deploy a templated QoS policy on certain of their devices. I believe exploring the code that Cisco AutoQoS generates on a Catalyst 3750X switch provides a good baseline of QoS implementation specifics for Cisco products.

Read more

Enterprise QoS Part 05 – A consistent QoS strategy: L2 & L3 traffic marking.

A significant part of the challenge of delivering a QoS strategy to a network is in the execution. How, exactly, does one write a QoS policy that will accomplish business goals consistently across a diverse network infrastructure? There is no obvious answer to this question (and large books have been written on the topic), as the types of QoS tools to apply differ by network location and by the types of problems that can be experienced at each of those locations.
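
One concrete piece of that execution is keeping L2 and L3 markings aligned as traffic crosses the network: an 802.1Q header carries a 3-bit CoS value, while the IP header carries a 6-bit DSCP value. The mapping sketched below (each CoS to its matching class selector, with voice marked EF) is a common convention offered purely as an illustration, not a universal standard.

```python
# Illustrative CoS (802.1p, 3 bits) to DSCP (6 bits) mapping. Mapping each
# CoS value to the corresponding class selector (CSx = x * 8) is a common
# default; many designs mark voice as EF (46) rather than CS5.

cos_to_dscp = {cos: cos * 8 for cos in range(8)}   # CS0..CS7
cos_to_dscp[5] = 46                                # voice: EF instead of CS5

for cos, dscp in cos_to_dscp.items():
    tos_byte = dscp << 2        # DSCP occupies the top 6 bits of the ToS byte
    print(f"CoS {cos} -> DSCP {dscp:2d} (ToS byte 0x{tos_byte:02x})")
```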

Read more

Enterprise QoS Part 04 – Can someone please explain all of these QoS terms?

Like any IT discipline, QoS is awash in terminology & acronyms. I’m going to tackle the most common QoS terms here, and try to provide some context in my definitions. Ideally, you’ll know not just a definition of the term, but also how the term fits into the larger QoS ecosystem.

ToS – ToS stands for “type of service.” The ToS value is stored in a byte of the IP header of a packet.
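
Because the ToS byte comes up in several of these definitions, a small sketch of how it breaks down may help: the top six bits carry the DSCP value and the bottom two are used for ECN. The example value is arbitrary.

```python
# Decompose the ToS/DiffServ byte from an IP header: the high 6 bits are
# the DSCP value, the low 2 bits are used for ECN. 0xb8 is just an example.

tos_byte = 0xb8                 # example: an EF-marked packet

dscp = tos_byte >> 2            # top 6 bits
ecn = tos_byte & 0b11           # bottom 2 bits
ip_precedence = dscp >> 3       # the old 3-bit precedence field maps to DSCP's top bits

print(f"ToS byte 0x{tos_byte:02x}: DSCP {dscp}, ECN {ecn}, IP precedence {ip_precedence}")
```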

Read more

Enterprise QoS Part 03 – Isn’t packet delivery at all costs the most important thing?

When designing a QoS scheme to apply to an enterprise network, a common misconception is that delivery of all packets is the most important thing. The logic goes that an application is best served if all of the packets of a transaction are delivered, no matter what it takes to make that happen. Network engineers often go down the road of enlarging buffers as much as they can, in the hopes of lessening packet drops.
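
A quick bit of arithmetic shows why ever-larger buffers aren’t free: a packet that lands at the back of a deep buffer waits for everything ahead of it to serialize. The buffer size and link speed below are arbitrary, chosen only for illustration.

```python
# Worst-case queueing delay added by a deep buffer: a packet at the back of
# the queue waits for everything ahead of it to be transmitted. Numbers are
# arbitrary, purely for illustration.

buffer_bytes = 10 * 1024 * 1024     # a 10 MB egress buffer
link_bps = 100_000_000              # a 100 Mbps link

worst_case_delay_s = (buffer_bytes * 8) / link_bps
print(f"Worst-case added delay: {worst_case_delay_s * 1000:.0f} ms")
# Roughly 839 ms of delay: the packets are delivered, but far too late for
# anything interactive like voice, or for a TCP flow trying to estimate RTT.
```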

Read more