Enterprise QoS Part 06 – Using Cisco AutoQoS as a QoS baseline.


Many large enterprises across the world use Cisco switching gear. As mentioned in previous parts of this series, implementing QoS across disparate devices, even within the Cisco ecosystem, can be frustrating as the syntax varies widely. In an attempt to reduce network operator frustration (as well as human error), Cisco introduced AutoQoS as a means to deploy a templated QoS policy on certain of their devices. I believe exploring the code that Cisco AutoQoS generates on a Catalyst 3750X switch provides a good baseline of QoS implementation specifics for Cisco products. The code gives us the chance to see just how Cisco deploys a QoS policy on one of their switches. We will break the code out section by section and discuss.

I feel I should add a note for those scoffers who feel that automation is somehow cheating, “the easy way out,” or an admission of defeat. I’ve been configuring network devices since 1996. Admittedly, there’s a sense of pride that comes when you sit at the CLI and type the commands that make network magic happen. But as time has worn on, I find that I have less concern about the implementation specifics, and more concern about making sure I’m applying the right tools to solve a specific problem. Put another way, the why is more important to me than the how. Does the how still matter to me? Well, certainly “how” matters. The CCIE program is much more about how than why, as a matter of fact. Believe me, you’d never, ever do in real life some of the things the CCIE training companies put you through to prepare for the CCIE lab exam. My point isn’t to denigrate the importance of CLI-fu, if that’s your definition of “how”. Instead, my point is that if some automated tools could reliably accomplish a complex task for you, wouldn’t you take advantage of them? Automation saves time, effort, and stress (assuming it works properly), and it reduces human error.

I see Cisco’s AutoQoS as a gift – a simple way to implement a QoS framework on your network that abstracts away the underlying silicon complexity. Is AutoQoS perfect? That depends on your point of view. As every network is different from every other network, the possibility exists that you’ll need to tweak the results a bit to accomplish some specific goal. However, my experience is that AutoQoS is a great example of, “Start here, and you probably won’t need to change a thing.” Stepping down from the soapbox: using AutoQoS doesn’t mean that network engineers should be ignorant of what it accomplishes. With that in mind, let’s walk through some code, and discuss what’s being accomplished by each section.

By way of introduction, MLS is short for “multilayer switching”, and is a legacy term that hearkens back to the days when a Cisco device switching at layer 2 and layer 3 was a novelty. The “mls” command hierarchy lives on. My comments and explanation are inline going forward.

! Enable QoS features on this switch. They are not enabled by default.
mls qos
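! (A quick sanity check I like to add, not part of the AutoQoS-generated code: from exec mode, the command below should report whether QoS is globally enabled. This and the other verification commands sprinkled through this post are listed from my memory of the 3750X family, so treat them as a starting point.)
show mls qos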

! In general, “mls qos map” commands translate one type of mark to another type of mark.
! In this command, DSCP values of 0, 10, 18, 24, and 46 are changed to 8 when traffic with those marks exceeds a specific rate limit defined by a policer.
mls qos map policed-dscp  0 10 18 24 46 to 8

! This maps CoS values to DSCP values, and the numbers are positionally significant. CoS 0 = DSCP 0, CoS 1 = DSCP 8, CoS 2 = DSCP 16, etc.
mls qos map cos-dscp 0 8 16 24 32 46 48 56
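! (Another verification aside, not AutoQoS output: the translation tables programmed by the two map commands above can be inspected from exec mode with the commands below.)
show mls qos maps policed-dscp
show mls qos maps cos-dscp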

! “SRR” stands for “shaped round robin”, and refers to the way packets sitting in a port’s buffers are dequeued. The next few command groups deal with “input” queues, aka the ingress queues on a switch interface.

! Bandwidth is allocated in this command with weights. A weight of 70 is assigned to queue 1, and a weight of 30 is assigned to queue 2. Note that the values do NOT have to add up to 100. These values are *weights* and not percentages. That said, it is common to choose weights that add up to 100, because weights that read like percentages are easier for humans to interpret.
mls qos srr-queue input bandwidth 70 30

! Setting the threshold below tells the switch at what point it should start tail-dropping queued packets, and for which traffic. Here, traffic in ingress queue 1 is evaluated against three thresholds, based on its marks. Traffic mapped to threshold 1 (80%) of ingress queue 1 will be tail-dropped while ingress queue 1 is more than 80% full. Traffic mapped to threshold 2 (90%) of ingress queue 1 will be tail-dropped while ingress queue 1 is more than 90% full. There is also a non-configurable threshold 3, fixed at 100%. Typically, less important traffic is measured against threshold 1, more important traffic against threshold 2, and critical traffic against threshold 3.
mls qos srr-queue input threshold 1 80 90

! The command groups below map specific traffic marks into specific thresholds for ingress queues, as outlined above.
! In this group of commands, frames with CoS mark 3 are mapped into threshold 2 of ingress queue 1. CoS 6 & 7 are mapped into threshold 3 of ingress queue 1. Etc. Don’t obsess overly much about the details of each individual command here. Instead, try to make sense of each command, and then put them into the larger context of the QoS policy. For example, if you realize that an 802.1p CoS mark of “7” is usually placed on traffic used for network control (i.e. very important for keeping the network functioning) and other high-priority traffic, then it makes sense that it would be placed into threshold 3, which is 100%. Why? That means it’s the last traffic that would get dropped if the ingress queues are congested. Traffic mapped to lower thresholds (such as CoS 3 traffic) will be dropped first. Do you see how these seemingly esoteric commands work together to form a policy? A bit complex, but not too hard to put together if you keep the big picture in mind.
mls qos srr-queue input cos-map queue 1 threshold 2 3
mls qos srr-queue input cos-map queue 1 threshold 3 6 7
mls qos srr-queue input cos-map queue 2 threshold 1 4
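! (Verification aside: the full CoS-to-ingress-queue/threshold table that results can be reviewed from exec mode with the command below.)
show mls qos maps cos-input-q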

! In this group of commands, packets with DSCP mark 24 are mapped into threshold 2 of ingress queue 1. DSCP marks 48-55 are mapped to threshold 3 of ingress queue 1. Etc.
mls qos srr-queue input dscp-map queue 1 threshold 2 24
mls qos srr-queue input dscp-map queue 1 threshold 3 48 49 50 51 52 53 54 55
mls qos srr-queue input dscp-map queue 1 threshold 3 56 57 58 59 60 61 62 63
mls qos srr-queue input dscp-map queue 2 threshold 3 32 33 40 41 42 43 44 45
mls qos srr-queue input dscp-map queue 2 threshold 3 46 47
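! (Verification aside: likewise, the DSCP-to-ingress-queue/threshold table can be reviewed with the command below.)
show mls qos maps dscp-input-q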

! The “priority-queue” element here is important, as it defines very specific dequeueing behavior for the traffic that ends up in this queue. As the name implies, traffic landing here is treated in an expedited fashion. “Hey, this is the priority queue. We don’t want traffic to hang around here for long. So let’s get you shipped out the door quickly!” A priority queue accomplishes this by choosing a queue to be the priority queue, and then reserving a particular amount of stack or “internal ring” bandwidth for it. The end result is that traffic in the priority queue experiences less delay and jitter than other traffic traversing the congested interface.

! In this command, ingress queue 2 becomes the priority queue, and has a bandwidth reservation of 30%.
mls qos srr-queue input priority-queue 2 bandwidth 30
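! (Verification aside: the ingress queue picture as a whole, including bandwidth weights, thresholds, and which queue is the priority queue, is summarized by the command below, if my memory of the 3750X serves.)
show mls qos input-queue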

! The next block of commands functions similarly to the input (ingress) versions, except now the queues are egress queues. In addition, there are 4 egress queues, which allows for more granular treatment of traffic, especially when considering 3 threshold levels per queue.

! An important note if you are evaluating these commands for the “big picture” is that egress queue 1 will be a priority queue, if there is a “priority-queue out” statement attached to a specific interface. As AutoQoS would have it, that’s exactly the case. Therefore, it’s safe to assume that these global configuration commands relating to egress queue 1 are in the larger context of being a priority queue.
mls qos srr-queue output cos-map queue 1 threshold 3 4 5
mls qos srr-queue output cos-map queue 2 threshold 1 2
mls qos srr-queue output cos-map queue 2 threshold 2 3
mls qos srr-queue output cos-map queue 2 threshold 3 6 7
mls qos srr-queue output cos-map queue 3 threshold 3 0
mls qos srr-queue output cos-map queue 4 threshold 3 1
!
mls qos srr-queue output dscp-map queue 1 threshold 3 32 33 40 41 42 43 44 45
mls qos srr-queue output dscp-map queue 1 threshold 3 46 47
mls qos srr-queue output dscp-map queue 2 threshold 1 16 17 18 19 20 21 22 23
mls qos srr-queue output dscp-map queue 2 threshold 1 26 27 28 29 30 31 34 35
mls qos srr-queue output dscp-map queue 2 threshold 1 36 37 38 39
mls qos srr-queue output dscp-map queue 2 threshold 2 24
mls qos srr-queue output dscp-map queue 2 threshold 3 48 49 50 51 52 53 54 55
mls qos srr-queue output dscp-map queue 2 threshold 3 56 57 58 59 60 61 62 63
mls qos srr-queue output dscp-map queue 3 threshold 3 0 1 2 3 4 5 6 7
mls qos srr-queue output dscp-map queue 4 threshold 1 8 9 11 13 15
mls qos srr-queue output dscp-map queue 4 threshold 2 10 12 14
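! (Verification aside: the egress equivalents of the ingress map verification commands shown earlier.)
show mls qos maps cos-output-q
show mls qos maps dscp-output-q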

! These output (egress) threshold commands establish the tail drop behavior for queues as they fill up with different classes of traffic. Note the keyword “queue-set”. You can define two different queue sets, and then apply one queue-set to one interface, and a different one to a different interface, resulting in different queueing behavior.

! The numbers in each command do not have obvious meaning without explanation, so we’ll take the first command below as an example, using each number in order.

! “output 1” = queue-set 1 (all interfaces are in this queue set by default.)

! “threshold 1” = set the thresholds of queue 1. Several different classes of traffic have been mapped into these particular queues & thresholds in the commands above. (Big picture, remember.)

! “100 100” = these numbers indicate two drop threshold percentages. Threshold 1 is represented by the first number, and threshold 2 the second.
! “50” = a percentage of allocated memory that is reserved for the queue, no matter what.
! “200” = the maximum amount of memory that the queue can use before the queue is officially full and frames are dropped. This is a percentage, and implies that you can oversubscribe the queue if there’s enough available buffer memory for the switch to draw from.

mls qos queue-set output 1 threshold 1 100 100 50 200
mls qos queue-set output 1 threshold 2 125 125 100 400
mls qos queue-set output 1 threshold 3 100 100 100 400
mls qos queue-set output 1 threshold 4 60 150 50 200

! This command determines what percentage of the available buffer space is allocated to each queue. In this example, queue-set 1 allocates 15% to queue 1, 25% to queue 2, 40% to queue 3, and 20% to queue 4. The practical effect of this is that a switch interface can store more (or less) of a particular class of traffic in an egress queue when the interface is congested.
mls qos queue-set output 1 buffers 15 25 40 20
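! (Verification aside: the thresholds and buffer allocations for both queue-sets can be reviewed with the command below.)
show mls qos queue-set
! (And, as mentioned above, a second queue-set can be applied to a different interface if you need different egress queueing behavior there. A hypothetical sketch follows: GigabitEthernet1/0/2 is just an example port, and queue-set 2 is left at its default values here.)
interface GigabitEthernet1/0/2
 queue-set 2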

We’re moving into a new section of the AutoQoS policy code at this point: the MQC (Modular QoS CLI) language, where traffic classes are defined by class-maps, class-maps are referenced in policy-maps, and policy-maps are applied to interfaces.

! Class maps define a “class” of traffic that a policy can act on. AutoQoS is defining three classes here: traffic marked with “EF”, traffic matching the “AUTOQOS-ACL-DEFAULT” access list (which is simply permit ip any any), and traffic marked with “CS3”. By itself, simply defining classes does nothing to impact traffic, just like creating an access-list by itself does nothing. Class-maps are part of the larger QoS system.

class-map match-all AUTOQOS_VOIP_DATA_CLASS
  match ip dscp ef
class-map match-all AUTOQOS_DEFAULT_CLASS
  match access-group name AUTOQOS-ACL-DEFAULT
class-map match-all AUTOQOS_VOIP_SIGNAL_CLASS
  match ip dscp cs3
!
ip access-list extended AUTOQOS-ACL-DEFAULT
  permit ip any any
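! (Verification aside: the class-maps and the ACL that AutoQoS created can be reviewed with the commands below.)
show class-map
show access-lists AUTOQOS-ACL-DEFAULT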

! The policy-map paragraph defines queueing behavior for the three different traffic classes defined in the class-map statements above. Not to beat this idea to death, but remember that until this policy-map is applied to an interface, it has no impact on queueing behavior.

! The AUTOQOS_VOIP_DATA_CLASS is named a little bit confusingly for my tastes, as I typically think of “VOIP” and “DATA” as two different classes of traffic, i.e. “VOIP” is something special, because after all – everything is actually data. But what Cisco is really getting at here is that this class is probably real-time voice traffic, since it’s matching traffic that is marked with a value of DSCP “EF” or 46 in decimal, the traditional marking for real-time voice.

! The point of this policy map is to make sure that real-time voice and signaling traffic (call setup) is prioritized over all other classes of traffic. This is accomplished with a policer. Here, a policer (a rate-limiter) sets a rate limit for traffic of a particular class. Traffic in excess of the policed rate will have its DSCP value changed to the value defined in the “mls qos map policed-dscp” command we found earlier in the AutoQoS-generated code, due to the “policed-dscp-transmit” action. Note that “drop” is the other available action.

! Without a packet capture tool and live data stream (which I don’t have handy), I can’t comment with confidence on what the “set dscp” command in each class paragraph is achieving. If class traffic does not exceed the policer rate, then I assume all traffic is set to the value indicated. The way the class-maps are written, this is redundant for the VOIP_DATA and VOIP_SIGNAL classes. In the case of the DEFAULT class, it seems that all traffic is being reset to the default DSCP value, no matter what other value it might have had. On the other hand, if the traffic does indeed exceed the policed rate, will the “set” command in the policy take precedence? Or will the “mls qos map policed-dscp” command take precedence? I’m not sure, but will find out and update this post later.

! More important to grasp is the overall effect of the policy. What we end up with is real-time voice traffic being sent along marked at EF. Voice codecs generally require something in the 8Kbps – 64Kbps range, but here the policer is allowing for 128Kbps. In theory, we’d never exceed the policed rate for voice traffic, at least not on an access port. The same logic holds true for call signaling, where it’s unlikely we’d ever exceed 32Kbps. In this policy, any other traffic falls through to the default class, is marked with the default DSCP value, and could be re-marked down to DSCP 8 (per the policed-dscp map above) if the data stream exceeds 10Mbps. This is less likely than it might seem, as typical user workstations simply don’t send that much data as compared to a server.

policy-map AUTOQOS-SRND4-CISCOPHONE-POLICY
 class AUTOQOS_VOIP_DATA_CLASS
   set dscp ef
  police 128000 8000 exceed-action policed-dscp-transmit
 class AUTOQOS_VOIP_SIGNAL_CLASS
   set dscp cs3
  police 32000 8000 exceed-action policed-dscp-transmit
 class AUTOQOS_DEFAULT_CLASS
   set dscp default
  police 10000000 8000 exceed-action policed-dscp-transmit
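! (Verification aside: the resulting policy-map can be reviewed with the command below. To my recollection, per-class policer counters on this platform show up under the per-interface “show mls qos interface” commands listed after the interface configuration below, rather than here.)
show policy-map AUTOQOS-SRND4-CISCOPHONE-POLICY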

! Everything we’ve looked at so far has been global configuration. Now, let’s look at what AutoQoS has applied to an individual interface that’s been instructed to use a “cisco-phone” profile. To simplify the process, I’ve stripped out all of the non-AutoQoS related commands that you might see on an access port, such as switchport definitions, port-security, spanning-tree tweaks, storm-control, access and voice VLANs, etc. Let’s take the AutoQoS generated commands one at a time.

! “srr-queue bandwidth share” establishes a weight to be given to traffic that has landed in each of the four egress queues. These numbers are weights that form a ratio, not percentages that must add up to 100. Here, queue 1 has a weight of 1, queue 2 has a weight of 30, etc. Therefore, traffic mapped to each queue is guaranteed a minimum amount of bandwidth, but can use more bandwidth if it is available. An important note is that queue 1’s value is irrelevant here because of the presence of the “priority-queue out” statement, which means traffic in queue 1 gets special treatment. Therefore, the weight value assigned to queue 1 is ignored in the SRR calculation.

! “priority-queue out” instructs the interface to make egress queue 1 an expedited queue. Traffic that lands in this queue includes DSCP EF (46) traffic, used by telephony applications to mark voice traffic, as indicated by the global “mls qos srr-queue output dscp-map queue 1 threshold 3 46 47” statement above.

! “mls qos trust device cisco-phone” is telling the switch that it’s okay to trust the marks on traffic flowing into the port, assuming the device on the other end identifies itself as a Cisco phone using Cisco Discovery Protocol (CDP).

! “mls qos trust cos” instructs the switch to trust the 802.1p CoS value assigned to the Ethernet frame flowing into the interface. CoS values can then be mapped to DSCP values in the ToS byte of the IP header based on the “mls qos map cos-dscp 0 8 16 24 32 46 48 56” global command.

! “auto qos voip cisco-phone” is one of several commands an operator could type to kick off AutoQoS provisioning for the interface, depending on the template chosen. Here, we’ve chosen the template of a Cisco phone. If the switch had never had AutoQoS run before, this would create the global command framework in addition to the interface specifics. Note that executing this command will possibly result in one of those uncomfortable CLI pauses network engineers hate. Note that the Cisco documentation for the “auto qos” command hierarchy goes into detail on configuration for marking, mapping of traffic to queues, ingress & egress queue configurations, and caveats.

! “service-policy input AUTOQOS-SRND4-CISCOPHONE-POLICY” applies the policy map configured above to traffic flowing into the interface. Once again, it’s somewhat confusing as to what the actual state of traffic marks will be once they leave the interface, as Cisco documentation notes the following: “Classification using a port trust state (for example, mls qos trust [cos | dscp | ip-precedence]) and a policy map (for example, service-policy input policy-map-name) are mutually exclusive. The last one configured overwrites the previous configuration.” However, I can say from experience having deployed this scheme, and considering that the values referred to in both the CoS to DSCP maps and in the policy-maps are consistent, that it doesn’t matter especially much. Voice & signaling traffic will leave the switch with the right mark.

interface GigabitEthernet1/0/1
 srr-queue bandwidth share 1 30 35 5
 priority-queue out
 mls qos trust device cisco-phone
 mls qos trust cos
 auto qos voip cisco-phone
 service-policy input AUTOQOS-SRND4-CISCOPHONE-POLICY
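For completeness, these are the per-interface verification commands I’d reach for after applying the template. I’m listing them from memory of the 3750X family, so treat them as a starting point rather than gospel; the “statistics” and “policers” variants are where I’d look to see whether the policers are actually taking action.

show auto qos interface GigabitEthernet1/0/1
show mls qos interface GigabitEthernet1/0/1
show mls qos interface GigabitEthernet1/0/1 queueing
show mls qos interface GigabitEthernet1/0/1 statistics
show mls qos interface GigabitEthernet1/0/1 policers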

Aside from abuse of the reader, I have a couple of reasons I went through that exercise of breaking down an AutoQoS-generated policy line by line. One is that I want to point out the complexity of building a cohesive QoS policy on a layer 2 / layer 3 switch. To come up with a scheme, validate the scheme for your enterprise, and deploy the scheme would be a lot of hard work, as any network engineer who’s been through the exercise can tell you. In addition, many of these commands overlap in functionality, and so there’s a challenge in determining which command will take action under which circumstances. Implicit in any complex task is the possibility of getting some element of it wrong and negatively impacting your business applications. And perhaps the most complex bit of all of this code is that 99%+ of the time, your network just won’t leverage it. While marking policies are relevant no matter what, queueing & tail-drop policies only matter if an interface is congested. User-facing access layer switches in enterprise networks are almost never congested. The point being that you could well write your own QoS policy and get some of it wrong, but never know it until that rare case of access switch network congestion crops up, and your users begin to complain about poor call quality.

Another reason I had for going through this is to demonstrate balance when dealing with network automation. I am a believer in automating repetitive network tasks as much as possible, and deploying a cohesive QoS scheme across the enterprise is a great use case for automation. But I also believe that automation doesn’t excuse the network engineer from understanding what’s actually going on under the hood. “I used AutoQoS, and the phones sound great!” sounds like something you want to be able to say, right up to the point where the phones don’t sound great, and you have no idea how to troubleshoot the problem.

Search for all parts in the Enterprise QoS series.