
OECG – Chapter 15


This section of the chapter discusses layer 2 queuing and drop methods on the 3550 platform, with a bit of attention paid to the 2950 by way of comparison. I’ll try to be brief, as I suspect most of us aren’t often worried about managing our “cheap” bandwidth. Many of us are probably running gig to the desktop, and congestion is rarely an issue. But for those shops where it’s a concern (you guys running IPT to your video editing engineers across the same wire), these points will be salient.

Cisco 3550s queue inbound traffic with FIFO while waiting for the outbound queue – pretty boring stuff. Outbound traffic has more complicated queuing.

  • Up to 4 queues per interface are supported.
  • Traffic for Ethernet frames is split across the queues based on CoS.
  • Scheduling is performed with weighted-round-robin.
  • Optionally, you can configure an expedited/priority queue.
  • When the frame is to be forwarded, the following takes place:
    • The frame’s DSCP is inspected to determine the CoS value.
    • The switch looks at the shiny new CoS value and places the frame in the corresponding queue.
  • Weighted Round Robin (WRR) is the scheduler servicing the queues. Its algorithm is based on the number of frames, rather than bytes. So if you’ve got “wrr-queue bandwidth 10 20 30 40”, WRR would send 10 frames from the first queue, 20 from the second, etc.
  • Queue #4 can be a priority queue, if you use the “priority-queue out” command.
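To make the frame-based WRR behavior above concrete, here’s a rough Python sketch. This is purely an illustration of the scheduling logic, not anything running on the switch: each pass sends up to the configured weight in frames from each queue, and an optional priority queue is drained completely first, like queue 4 with “priority-queue out”.

```python
from collections import deque

def wrr_schedule(queues, weights, priority_queue=None):
    """Frame-based weighted round robin, sketching the 3550's scheduler.

    queues: list of deques of frames; weights: frames serviced per pass
    (e.g. [10, 20, 30, 40] for "wrr-queue bandwidth 10 20 30 40").
    If priority_queue is set (0-based index), that queue is drained
    first on every pass, like queue 4 under "priority-queue out".
    """
    sent = []
    while any(queues):
        if priority_queue is not None:
            # Expedite queue: empty it before the WRR queues get a turn.
            while queues[priority_queue]:
                sent.append(queues[priority_queue].popleft())
        for q, w in zip(queues, weights):
            # Send up to `w` frames from this queue, counting frames, not bytes.
            for _ in range(w):
                if not q:
                    break
                sent.append(q.popleft())
    return sent
```

Note that because the weights count frames rather than bytes, a queue full of jumbo frames gets disproportionately more bandwidth than its weight suggests.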

During times of congestion, you can use either WRED or tail-drop to manage the discards, at least on gigabit interfaces. Fast ethernet interfaces use neither WRED nor tail-drop, but the book doesn’t get into what they do use…and I don’t want to look it up because my head is going to explode if I don’t finish this chapter soon. Very soon.

  • Each outbound queue has 2 WRED thresholds.
  • Thresholds are considered percentages of the queue length.
  • You can tweak the thresholds to be different for each outbound queue.
  • If the queue-depth is below the threshold, then there’s no dropping.
  • If the queue-depth is above the threshold, then a percentage of packets is dropped, increasing linearly from 0 to 100% as the queue depth grows from the threshold toward 100%.
  • You can map DSCP values to particular thresholds using the “wrr-queue dscp-map” command.
  • Enable WRED with a “wrr-queue random-detect”. This consequently disables tail-drop. This is effective for all queues on the configured interface. However, if you want the other queues to behave like WRED and not tail-drop, you’ll need to configure thresholds other than the default of 100%.
  • For tail-drop, there is some tweakable behavior. “wrr-queue threshold” allows you to set per-queue thresholds; once the queue fills beyond a threshold, frames mapped to it are discarded. Couple that with the “wrr-queue dscp-map” command, and you can tail-drop in a biased way, dropping frames with a particular DSCP value before others.
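As an illustration of the threshold bullets above, here’s a hedged Python sketch of the linear drop ramp. The exact drop curve on the 3550 is a platform implementation detail; this just models “no drops below the threshold, climbing to 100% as the queue fills,” with both values expressed as percentages of queue length.

```python
def wred_drop_probability(queue_depth_pct, threshold_pct):
    """Sketch of a WRED-style linear drop ramp (illustrative, not the
    3550's actual hardware behavior).

    Both arguments are percentages of the queue length (0-100).
    Returns the drop percentage for newly arriving frames.
    """
    if queue_depth_pct <= threshold_pct:
        # Below the threshold: no dropping at all.
        return 0.0
    if threshold_pct >= 100:
        # A 100% threshold (the default) never triggers random drops.
        return 0.0
    # Linear ramp: 0% at the threshold, 100% when the queue is full.
    return 100.0 * (queue_depth_pct - threshold_pct) / (100.0 - threshold_pct)
```

The second early return is why leaving the other queues at the default 100% threshold effectively means they keep tail-dropping rather than behaving like WRED.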

Finally, when comparing the 2950 to the 3550, consider the following differences between the platforms:

  • On a 2950, you can set queue weights only globally. On a 3550, you can do it per interface.
  • On a 2950, you can set CoS-to-queue mappings only globally. On a 3550, per interface.
  • On a 2950, you establish a priority/expedite queue by setting the weight of queue 4 to “0”.
  • On a 2950, WRED is not supported.
  • On a 2950, the queue scheduler is “strict priority”, whereas the 3550’s is WRR.
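To see why that last scheduler difference matters, here’s a rough Python sketch of strict-priority scheduling (assuming queue 4 is the highest priority, which is how I read the 2950’s behavior): lower queues send nothing until every higher queue is empty, so a busy high queue can starve the rest entirely.

```python
from collections import deque

def strict_priority_schedule(queues):
    """Sketch of strict-priority scheduling, as on the 2950.

    queues: list of deques, lowest-priority queue first. The
    highest-numbered non-empty queue is always serviced, so lower
    queues starve until everything above them drains.
    """
    sent = []
    while any(queues):
        # Always take the next frame from the highest non-empty queue.
        for q in reversed(queues):
            if q:
                sent.append(q.popleft())
                break
    return sent
```

Contrast this with WRR, where every queue with a nonzero weight is guaranteed some share of the link even under congestion.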