Custom queuing, or CQ, responds to the shortcomings of PQ by guaranteeing that every queue gets some minimum amount of bandwidth. The idea is to avoid queue starvation; queues need to eat, and bandwidth is their food. CQ provides 16 queues you can tweak, plus a hidden system queue that's always there, a tweak-free zone. The downside of this approach is that no queue can be used as a low-latency queue. In PQ, the "high" queue was implicitly low-latency; it ruled the roost, starving all competing queues of bandwidth and giving traffic that flowed through it phenomenal service. In CQ, no queue can starve the others out because everyone has to share, which makes CQ a poor queuing tool when you have traffic (like voice) that must always get where it's going in a timely, low-latency fashion.
The queues in CQ are not prioritized. Rather, each queue is assigned an implied percentage of bandwidth. Technically, each CQ queue is assigned a byte count that will be serviced each time the CQ scheduler comes by; do the math against the other queues' byte counts to determine what percentage of bandwidth that works out to during a time of congestion. The CQ scheduler services each queue round-robin. If there's a packet in a queue, CQ will continue forwarding packets from that queue until the bytes transmitted meet or exceed the allotted byte count, or the queue is emptied. Then CQ moves on to the next queue.
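To make the byte-count math and the round-robin service concrete, here's a minimal Python sketch. The queue names, byte counts, and packet sizes are made up for illustration; they're not Cisco defaults, and real CQ runs in the router's forwarding path, not in a loop like this.

```python
from collections import deque

# Two illustrative queues: each has a configured byte count (its allotment
# per scheduler pass) and a FIFO of packet sizes in bytes.
queues = {
    "q1": {"byte_count": 3000, "packets": deque([1500, 1500, 1500])},
    "q2": {"byte_count": 1500, "packets": deque([1500, 1500])},
}

# The configured byte counts imply a bandwidth share during congestion:
total = sum(q["byte_count"] for q in queues.values())
for name, q in queues.items():
    print(f"{name}: ~{100 * q['byte_count'] / total:.0f}% of bandwidth")

def service_pass(queues):
    """One round-robin pass: drain each queue until the bytes sent meet or
    exceed its byte count, or the queue empties. Returns bytes sent per queue."""
    sent = {}
    for name, q in queues.items():
        tx = 0
        while q["packets"] and tx < q["byte_count"]:
            # Whole packets are forwarded, so a queue can overshoot its
            # allotment slightly (meet *or exceed*).
            tx += q["packets"].popleft()
        sent[name] = tx
    return sent

print(service_pass(queues))
```

With these numbers, q1 is implicitly promised about 67% of the bandwidth and q2 about 33%. Note the overshoot behavior: because whole packets are sent, a queue with big packets can consistently exceed its byte count, skewing the real percentages, which is a classic CQ tuning gotcha.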
CQ queue lengths are all 20 packets by default, since philosophically, no one queue is better than the next. Tail drop is also used here: if CQ tries to put a packet into a queue that's already full, the packet is dropped.
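Tail drop on a fixed-depth queue is simple enough to sketch. This is an illustrative model, not router code; the 20-packet limit mirrors the CQ default mentioned above.

```python
from collections import deque

QUEUE_LIMIT = 20  # CQ's default queue depth, in packets

def enqueue(queue, packet, limit=QUEUE_LIMIT):
    """Append the packet unless the queue is full; tail-drop otherwise."""
    if len(queue) >= limit:
        return False  # queue full: the arriving packet is dropped
    queue.append(packet)
    return True

q = deque()
for i in range(25):
    enqueue(q, i)
print(len(q))  # only the first 20 packets fit; the last 5 are tail-dropped
```

Tail drop is indiscriminate: once the queue fills, every arriving packet is lost regardless of what it is, which is one reason latency-sensitive traffic fares poorly in a congested CQ queue.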
Ethan Banks writes & podcasts about IT, new media, and personal tech.