Enterprise QoS Part 03 – Isn’t packet delivery at all costs the most important thing?

When designing a QoS scheme to apply to an enterprise network, a common misconception is that delivery of all packets is the most important thing. The logic goes that an application is best served if all of the packets of a transaction are delivered, no matter what it takes to make that happen. Network engineers often go down the road of enlarging buffers as much as they can, in the hopes of lessening packet drops.

In service provider networks, this problem has become a well-known epidemic termed “bufferbloat.” My understanding is that service providers have contractual obligations to deliver packets, i.e. meet their service level agreements, and bloated buffers are one of the strategies they use to deliver under any circumstances. Unfortunately, bufferbloat often does applications more harm than good. While service providers aren’t necessarily in the business of providing excellent user application experiences (any of my SP friends feel free to augment my impression here), enterprise IT teams decidedly are in the business of delivering excellent user application experiences.

Remember the point I made in the last part about TCP being acknowledged and UDP being unacknowledged? Here’s the thing about TCP. If a packet doesn’t make it to the recipient, the recipient will not send an acknowledgement. If the sender doesn’t get an acknowledgement, then the sender will retransmit the unacknowledged data. Bufferbloat can negatively impact TCP’s natural recovery mechanisms here – it’s all a matter of timing. Let’s walk through a scenario.

  1. A sender transmits data using TCP.
  2. The TCP segment transmission path includes a congested link, and the IP packet is queued into a buffer.
  3. The buffer is very large, and the IP packet arrived when the queue was nearly full, causing a long delay in transmission.
  4. The sender’s retransmission timer expires because no acknowledgement ever arrived, so the sender retransmits the segment.
  5. While step 4 was happening, the congested interface finally worked through the buffer and transmitted the original TCP segment.
  6. The original TCP segment is finally received.
  7. Shortly afterward, the retransmitted copy of the same segment is also received – a duplicate.

Yuck. Not only does this sort of thing hurt application performance and probably the user experience, but the link congestion problem is exacerbated by the fact that twice the amount of data transited the overworked link. What would have been an improvement? If the original TCP segment had been dropped instead of being stuck at the end of an excruciatingly large buffer. A prompt drop means the retransmission actually serves a purpose, and the sender backs off its sending rate, which relieves pressure on the congested link rather than adding to it.
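
To put rough numbers on the timing problem, here’s a quick back-of-the-envelope sketch in Python. The link speed, buffer depth, and retransmission timeout below are made-up values for illustration only – swap in numbers from your own environment.

```python
# Back-of-the-envelope timing for the scenario above.
# All numbers are illustrative assumptions, not measurements.

LINK_RATE_BPS = 10_000_000   # 10 Mbps bottleneck link
BUFFER_BYTES = 1_000_000     # 1 MB of queued data sitting ahead of our segment
RTO_SECONDS = 0.2            # sender's retransmission timeout (~200 ms)

# Time for the bottleneck to drain everything queued ahead of our segment.
queueing_delay = (BUFFER_BYTES * 8) / LINK_RATE_BPS   # seconds

print(f"Queueing delay through the buffer: {queueing_delay:.3f} s")
print(f"Sender's retransmission timeout:   {RTO_SECONDS:.3f} s")

if queueing_delay > RTO_SECONDS:
    # The ACK can't possibly come back before the timer fires, so the sender
    # retransmits a segment that is still sitting in the queue.
    print("Spurious retransmission: the same data crosses the congested link twice.")
else:
    print("The original segment drains in time; no duplicate is sent.")
```

With those made-up numbers, the queueing delay works out to 800 ms – four times the timeout – so the duplicate is guaranteed.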

The greater point here is that buffer tuning should be undertaken cautiously by network engineers who understand their application behaviors extremely well. And by extremely well, I mean that packet analysis has been performed to understand average packet sizes and how the host’s TCP/IP stack reacts to packet loss. Overall throughput is not helped by oversized buffers, and application performance can in fact suffer. Buffers are certainly useful to catch a sudden, unusual traffic burst that can be serviced quickly, but a big buffer sitting in front of a habitually congested interface is a bad idea. You’ve just added latency to an interface that’s already running at maximum capacity.
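
As a concrete (and deliberately simplified) example of what that analysis feeds into, here’s a rough sizing sketch. The link rate, average packet size, and latency budget are placeholder assumptions – the whole point is that you’d replace them with figures from your own packet captures.

```python
# Rough buffer-sizing sketch: how deep can a queue get before it adds more
# latency than the application can tolerate? All inputs are assumptions you
# would replace with numbers from your own captures.

LINK_RATE_BPS = 100_000_000   # 100 Mbps egress interface
AVG_PACKET_BYTES = 800        # average packet size observed in captures
LATENCY_BUDGET_S = 0.020      # 20 ms of added queueing delay we can live with

# A full queue of depth N packets adds roughly N * (per-packet serialization time).
packet_time = (AVG_PACKET_BYTES * 8) / LINK_RATE_BPS    # seconds per packet
max_queue_depth = int(LATENCY_BUDGET_S / packet_time)   # packets

print(f"Serialization time per packet: {packet_time * 1e6:.1f} microseconds")
print(f"Max queue depth within the {LATENCY_BUDGET_S * 1000:.0f} ms budget: "
      f"{max_queue_depth} packets")
```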

If you’re still stuck on the idea that packet loss is inherently bad, remember that we’ve established that TCP can handle packet loss just fine. That’s what TCP was built to do: function reliably when transiting a network of unknown capacity whose topology and transmission characteristics could change at any given point in time.
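
If it helps to see the mechanism, here’s a toy model of the additive-increase, multiplicative-decrease behavior a Reno-style TCP sender uses. It’s a sketch for intuition only – not a real TCP stack – and the loss pattern is invented.

```python
# Toy model of the Reno-style reaction to loss: additive increase,
# multiplicative decrease. Loss is treated as a signal to slow down,
# not as a catastrophe.

def next_cwnd(cwnd: float, loss_detected: bool) -> float:
    """One round trip of AIMD congestion control."""
    if loss_detected:
        return max(cwnd / 2.0, 1.0)   # loss detected: halve the congestion window
    return cwnd + 1.0                 # no loss: probe for more capacity

cwnd = 1.0
for rtt in range(1, 13):
    loss = (rtt % 6 == 0)             # pretend a packet drops every sixth round trip
    cwnd = next_cwnd(cwnd, loss)
    print(f"RTT {rtt:2d}: {('loss' if loss else 'ok'):<4} cwnd = {cwnd:4.1f} segments")
```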

That’s fine for TCP, but what about our real-time voice & video problem child, UDP? We know that UDP traffic isn’t going to be retransmitted if it’s dropped. The sender just keeps on blindly sending. Do big buffers make sense in that case? For real-time UDP applications, the answer is decidedly no – big buffers are not the right direction to go. Instead, the best answer for real-time UDP applications is to make sure that there will always be enough bandwidth available over shared links for the traffic to be delivered. In addition, the network device that handles dequeuing of buffered traffic on a congested interface must be able to recognize real-time voice traffic as sensitive to being stuck in the buffer for too long, and therefore dequeue that traffic in a timely way & on a regular interval. These requirements can be satisfied with a low-latency queue that not only reserves interface bandwidth for the traffic class that uses it, but also dequeues traffic in a way that minimizes jitter – the variation in the length of time between packets.
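
To illustrate the dequeuing idea – and only the idea; this is a simplified sketch, not how any particular platform implements its low-latency queue – here’s a toy scheduler where the real-time class is pulled out ahead of bulk traffic at a steady cadence, but capped so it can’t starve the link. The class names and the 30% share are assumptions for the example.

```python
from collections import deque

# Two queues feeding one congested interface. Class names and the 30% share
# are illustrative assumptions, not a vendor feature.
realtime = deque()       # e.g. voice/video packets, jitter-sensitive
bulk = deque()           # everything else

REALTIME_SHARE = 0.30    # roughly this fraction of transmit opportunities is reserved
_credit = 1.0            # accrues each transmit opportunity, spent when real-time sends

def dequeue_next():
    """Choose the next packet to send on the congested interface."""
    global _credit
    _credit = min(_credit + REALTIME_SHARE, 1.0)   # accrue, but don't bank a long burst
    if realtime and _credit >= 1.0:
        _credit -= 1.0
        return realtime.popleft()    # served ahead of bulk, within its capped share
    if bulk:
        return bulk.popleft()
    if realtime:
        return realtime.popleft()    # no competing traffic, so no need to hold it back
    return None

# Tiny demo: even with a backlog of bulk traffic, real-time packets are pulled
# out at a steady cadence instead of waiting at the back of a deep queue.
realtime.extend(f"rtp-{i}" for i in range(3))
bulk.extend(f"bulk-{i}" for i in range(10))
print([dequeue_next() for _ in range(10)])
```

The steady cadence is what keeps jitter down: a real-time packet’s wait is bounded by the credit cycle rather than by however deep the bulk queue happens to be.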

Search for all parts in the Enterprise QoS series.