When dealing with the WAN, a common problem is that the actual available bandwidth of a circuit might differ from the bandwidth of the physical circuit handoff. For example, a carrier might provide an enterprise with a gigabit Ethernet handoff, when in fact the connection is being throttled to 100Mbps downstream. A similar sort of problem appears when head-end bandwidth differs from remote site bandwidth. For example, a headquarters site might have a full DS3 connection, while remote offices only have T1s.
These situations put QoS policies in jeopardy, because congestion will not occur until after the traffic has already left the WAN egress interface. If there is no congestion on the interface, then prioritization of traffic can't happen. The circuit will happily forward traffic in an uncongested fashion: first in, first out. Downstream, traffic will hit the bottleneck, and excess traffic will be dropped until TCP steps down the flow rate. This will happen at points beyond your control. Assuming you have engaged the carrier's QoS scheme, it is likely that the carrier will prioritize according to that scheme. For example, voice traffic hitting the "P1" queue in my previous example would presumably be dequeued and sent across the cloud first, although I wouldn't necessarily count on it. And, to get to the point of this article, there is a modification a network engineer can make to the QoS policy to handle the bandwidth mismatch.
The root of the problem I’ve described is that traffic is not actually congested at the local interface. Even though somewhere in the cloud there is a congestion point, the local interface is happily transmitting at a higher physical data rate. Therefore, the local interface’s QoS policy is never triggered, as there are never packets waiting in the queue to be sent. The solution to this problem is to create artificial congestion by using a traffic shaper. The idea is this. If you have a gigabit Ethernet interface to your carrier, but only get 100Mbps from them, add a 100Mbps traffic shaper to the link as a QoS policy. Then nest inside of the shaping policy your prioritization policy. As traffic is queued by the shaping policy, dequeueing that shaped traffic will be governed by the nested prioritization policy. In Cisco-speak, this technique is called “Nested Hierarchical Policies” or “Hierarchical Traffic Policies,” one example of which can be seen here.
Let’s look at an example of our own here. We will shape traffic on a gigabit Ethernet link to 100Mbps to match the downstream bandwidth. We will expand on the class maps and policy built in the previous part of this series. Therefore, I won’t break it down in detail, except to point out the differences.
class-map match-any WAN-PROVIDER_P1
description REAL-TIME VOICE
match precedence 5
class-map match-any WAN-PROVIDER_P2
description STREAMING VIDEO
match precedence 4 6 7
class-map match-any WAN-PROVIDER_P3
description CALL SIGNALLING
match precedence 2 3
class-map match-any WAN-PROVIDER_P4
description BEST EFFORT
match precedence 0 1
! Queueing values in PRIORITIZE are placeholders; use the values
! built in the previous part of this series.
policy-map PRIORITIZE
 class WAN-PROVIDER_P1
  priority percent 30
 class WAN-PROVIDER_P2
  bandwidth percent 25
 class WAN-PROVIDER_P3
  bandwidth percent 15
 class class-default
  fair-queue
policy-map OUTBOUND
 class class-default
  shape average 100000000
  service-policy PRIORITIZE
! Interface name is illustrative.
interface GigabitEthernet0/1
 ip address 10.11.12.2 255.255.255.252
 service-policy output OUTBOUND
Note that in this configuration, there are two policy-maps: one called PRIORITIZE, the other OUTBOUND. The OUTBOUND policy consists of a traffic shaper applied to class-default. Since class-default is the only class defined in the policy, it applies to all traffic. The traffic is shaped to a rate of 100,000,000 bits per second, which equals 100Mbps, our target rate to match the downstream rate limiter the carrier has introduced. With that achieved, we've created the potential for artificial congestion. If we try to put more than 100Mbps of traffic through the link, the shaper will begin queueing the excess, up to the limits of the shaper's burst capacity. Dequeueing of that queued traffic is governed by the nested policy, applied by the "service-policy PRIORITIZE" statement. Therefore, we can enforce an LLQ for voice traffic and CBWFQ for the other traffic classes.
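To make the mechanics concrete, here is a toy token-bucket sketch in Python. This is not Cisco's implementation, and the class name, rates, and packet sizes are all invented for illustration; it only shows the relationship described above: the parent shaper's token bucket decides when a packet may leave, while the nested priority logic decides which queued packet leaves first.

```python
import heapq

class ShapedPriorityQueue:
    """Toy model of a hierarchical policy: a token bucket enforces the
    shaped rate (the parent policy), while a priority heap decides which
    queued packet is dequeued next (the nested child policy)."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate_bps = rate_bps      # shaped rate, e.g. 100 Mbps
        self.burst = burst_bytes      # bucket depth
        self.tokens = burst_bytes     # bucket starts full
        self.queue = []               # heap of (priority, seq, size_bytes)
        self.seq = 0                  # tiebreaker preserves FIFO within a class

    def enqueue(self, priority, size_bytes):
        # Lower number = higher priority (P1 voice before P4 best effort).
        heapq.heappush(self.queue, (priority, self.seq, size_bytes))
        self.seq += 1

    def refill(self, elapsed_s):
        # Tokens accumulate at the shaped rate, capped at the burst depth.
        self.tokens = min(self.burst,
                          self.tokens + self.rate_bps / 8 * elapsed_s)

    def dequeue(self):
        # Send the highest-priority queued packet, but only if tokens allow.
        if not self.queue or self.queue[0][2] > self.tokens:
            return None
        prio, _, size = heapq.heappop(self.queue)
        self.tokens -= size
        return (prio, size)

shaper = ShapedPriorityQueue(rate_bps=100_000_000, burst_bytes=1500)
shaper.enqueue(priority=4, size_bytes=1500)  # best-effort packet arrives first
shaper.enqueue(priority=1, size_bytes=200)   # voice packet arrives second
print(shaper.dequeue())  # → (1, 200): voice leaves first despite arriving later
print(shaper.dequeue())  # → None: bucket can't cover 1500 bytes until it refills
```

The key point the sketch captures is that prioritization only matters once the shaper forces packets to wait; with no backlog, everything is FIFO regardless of priority.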
This technique is straightforward when shaping all traffic to a single rate and then attaching a queueing policy to the shaping policy. With some modification, the same technique can be used to shape traffic to match the far-end bandwidth of a remote circuit, such as a local T3 communicating with a remote T1. Instead of shaping all traffic (class-default) to a single rate, a class would be built for every remote office circuit, each shaped to the proper rate. This doesn't scale well if there are a large number of offices, and it makes manual management of the QoS policy quite tedious. A better choice would be a management tool to handle the QoS policies for you. While I have not used it myself, I have heard good reports of ActionPacked Networks' LiveAction QoS Configure tool.
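As a sketch of that per-site approach, the parent policy gets one shaping class per remote office rather than a single class-default. The site names, subnets, and ACLs below are invented for illustration; 1536000 bps is the usable rate of a T1:

```
ip access-list extended SITE-A-TRAFFIC
 permit ip any 10.20.1.0 0.0.0.255
ip access-list extended SITE-B-TRAFFIC
 permit ip any 10.20.2.0 0.0.0.255
class-map match-any SITE-A
 match access-group name SITE-A-TRAFFIC
class-map match-any SITE-B
 match access-group name SITE-B-TRAFFIC
policy-map OUTBOUND
 class SITE-A
  shape average 1536000
  service-policy PRIORITIZE
 class SITE-B
  shape average 1536000
  service-policy PRIORITIZE
```

Each site's class shapes to that site's far-end rate, and the same nested PRIORITIZE policy handles dequeueing within each shaper, which is exactly why the configuration grows linearly with the number of offices.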
Ethan Banks writes & podcasts about IT, new media, and personal tech.