Network engineers the world over have a loathing for quality of service (QoS). Next to IP multicast, QoS ranks near the top of the list of frustrating network technologies. But not for me. I thoroughly enjoy QoS. Why? QoS addresses real-world problems of traffic not being delivered on time, or at all. When correctly applied, QoS strategies effect real, positive change for applications riding across a busy network. Done right, QoS helps ensure that traffic is delivered every time, and on time.
I have been fortunate enough to deploy network-wide QoS at a couple of enterprises, and I’m currently evaluating a QoS strategy for a third enterprise that currently doesn’t have one. As with any network deployment, the problems I’ve leveraged QoS to solve have been similar, yet unique. For example…
- Different applications require different QoS treatments.
- Different physical media require different QoS implementation techniques.
- Different hardware platforms have different QoS capabilities that require different code.
- Sundry applications mark their traffic, but with no predictable strategy, and no guarantee that traffic is marked at all.
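That last point is worth a quick illustration. Marking is often left to the application itself: any program can ask the host OS to stamp a DSCP value into the packets it sends, which is exactly why markings arriving at the network edge are unpredictable. A minimal Python sketch (assuming a Linux host, where the `IP_TOS` socket option carries the DSCP bits) might look like this:

```python
import socket

# DSCP EF (Expedited Forwarding, decimal 46) lives in the top six bits
# of the IP TOS byte, so shift left two bits to form the TOS value.
DSCP_EF = 46
tos = DSCP_EF << 2  # 184

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Read the option back to confirm the kernel accepted the marking.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

Nothing forces an application to do this, and nothing stops it from picking any DSCP value it likes, which is why a sane QoS strategy classifies and re-marks traffic at the network edge rather than trusting whatever shows up.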
In this series, I’m going to talk through an enterprise QoS deployment strategy based on my experience. I’m not trying to write a book; this series was inspired by a QoS workshop I’m writing for Interop NYC 2013. So, I’m going to stick with a specific enterprise scenario, and then build a QoS strategy for it. The scenario is as follows:
- This is an enterprise (i.e. not service provider) network. While service providers certainly offer various classes of service across their infrastructures, many of the techniques used (reservation of bandwidth across a specific MPLS path, for instance) are not typically used in enterprise environments.
- The enterprise features an IPv4 Ethernet campus network.
- The enterprise connects to several remote offices via a wide area network. The WAN is supplied via an L3VPN MPLS service provider that hands off Ethernet or HDLC to the enterprise’s edge router. The enterprise owns the WAN edge router; it is not managed by the carrier.
- The enterprise runs voice, video and other data types across their infrastructure.
The intent of this scenario is to address QoS for a common enterprise topology running a common combination of traffic types. Storage over IP is also common (iSCSI, NFS), but tends to be isolated to the data center network segments of the enterprise. Applying QoS for storage traffic is a different sort of problem, and one that I plan to address in some blog posts & a Packet Pushers podcast of its own. The focus here will be on doing our best to guarantee timely delivery of a variety of traffic classes between endpoints that are potentially quite far from each other, at least in a network sense.
I’ve broken the series into several parts as listed below, which will post over the next several days. If QoS interests you, I hope you follow along. Please enjoy, and send your feedback to me via firstname.lastname@example.org.
- What is QoS, what does it do, and why do network engineers hate it?
- Why do some applications require QoS, while others do not?
- Isn’t packet delivery at all costs the most important thing?
- Can someone please explain all of these QoS terms?
- A consistent QoS strategy: L2 & L3 traffic marking & Cisco AutoQoS.
- A consistent QoS strategy: queueing collaboration applications.
- A consistent QoS strategy: shaping to match far-end bandwidth while still prioritizing.
- A consistent QoS strategy: end-to-end packet walk – congested vs. non-congested.
- QoS corner cases: throttling a bandwidth hog.
- QoS corner cases: tunneled traffic.
- QoS corner cases: DSCP mutation maps.