Class-Based WFQ and Low-Latency Queuing

Cisco created CBWFQ and LLQ using some of the best concepts from PQ, CQ, and WFQ, while adding several additional features. CBWFQ reserves bandwidth for each queue, and provides the ability to use WFQ concepts for packets in the default (class-default) queue. LLQ adds to CBWFQ the concept of a priority queue, but unlike PQ, LLQ prevents the high-priority queue from starving other queues. Additionally, both CBWFQ and LLQ use MQC for configuration, which means that they have robust classification options, including NBAR.

CBWFQ and LLQ use almost identical configuration; the one major difference is whether the bandwidth command (CBWFQ) or the priority command (LLQ) is used to configure the tool. Because both tools use MQC, both use class maps for classification and policy maps to create a set of classes to be used on an interface. The classes defined in the policy map each define a single queue; as a result, the terms queue and class are often used interchangeably when working with LLQ and CBWFQ.

CBWFQ and LLQ support 64 queues/classes. The maximum queue length can be changed, with the maximum possible value and the default length varying based on the model of router and the amount of memory installed. They both also have one special queue called the class-default queue. This queue exists even if it is not configured. If a packet does not match any of the explicitly configured classes in a policy map, IOS places the packet into the class-default class/queue. CBWFQ settings can be configured for the class-default queue.

The sections that follow cover the details of CBWFQ, and then LLQ.

CBWFQ Basic Features and Configuration

The CBWFQ scheduler guarantees a minimum percentage of a link's bandwidth to each class/queue. If all queues have a large number of packets, each queue gets the percentage of bandwidth implied by the configuration. However, if some queues are empty and do not need their bandwidth for a short period, that bandwidth is proportionally allocated across the other classes. (Cisco does not publish the details of how CBWFQ achieves these functions.)

Table 15-5 summarizes some of the key features of CBWFQ.

KEY POINT

Table 15-5 CBWFQ Functions and Features

CBWFQ Feature                       Description
Classification                      Classifies based on anything that MQC commands can match
Drop policy                         Tail drop or WRED, configurable per queue
Number of queues                    64
Maximum queue length                Varies based on router model and memory
Scheduling inside a single queue    FIFO on 63 queues; FIFO or WFQ on the class-default queue*
Scheduling among all queues         The result of the scheduler provides a percentage of guaranteed bandwidth to each queue

*Cisco 7500 series routers support FIFO or WFQ in all the CBWFQ queues.

Table 15-6 lists the key CBWFQ commands that were not covered in Chapter 14.

Table 15-6 Command Reference for CBWFQ

Command                                          Mode and Function
bandwidth {bandwidth-kbps | percent percent}     Class subcommand; sets literal or percentage bandwidth for the class
bandwidth {remaining percent percent}            Class subcommand; sets percentage of remaining bandwidth for the class
queue-limit queue-limit                          Class subcommand; sets the maximum length of a CBWFQ queue
fair-queue [queue-limit queue-value]             Class subcommand; enables WFQ in the class (class-default only)
max-reserved-bandwidth percent                   Interface subcommand; defines the percentage of link bandwidth that can be reserved for CBWFQ queues besides class-default (default: 75 percent)

Example 15-4 shows a simple CBWFQ configuration that uses the class-default queue. The configuration was created on R3 in Figure 15-5, using the following requirements:

■ All VoIP payload traffic is placed in a queue.

■ All other traffic is placed in another queue.

■ Give the VoIP traffic 50 percent of the bandwidth.

■ WFQ should be used on the non-VoIP traffic.


Figure 15-5 Network Used with CBWFQ and LLQ Configuration Examples

(The figure shows the example network: Client1 behind SW1 and R1 on one side of the WAN, and Server1 behind SW2 on the other. The annotations note that the CBWFQ policy places VoIP in one queue and all other traffic in another queue, and that all IP addresses begin with 192.168.)

Example 15-4 CBWFQ with VoIP in One Queue, Everything Else in Class-Default

! The class map matches on the UDP/RTP header and RTP port numbers.
class-map match-all voip-rtp
 match ip rtp 16384 16383
!
! Next, the policy map uses the bandwidth command to reserve 64 kbps for the class
! voip-rtp. Class-default gets some of the leftover bandwidth by default.
policy-map queue-voip
 class voip-rtp
  bandwidth 64
 class class-default
  fair-queue
!
! The interface's bandwidth 128 command is used as the basis for the limit on the
! amount of bandwidth that can be allocated in the policy map queue-voip.
! The load-interval command sets how often counters are updated. Also, note
! that the policy map is enabled for output; on routers, policy maps that
! perform queuing cannot be applied as input.
interface Serial0/0
 encapsulation frame-relay
 load-interval 30
 bandwidth 128
 service-policy output queue-voip

! This command lists counters, reserved bandwidth, maximum queue length (listed
! as max threshold), and a reminder that WFQ is used in the class-default queue.
R3# show policy-map int s 0/0
 Serial0/0

  Service-policy output: queue-voip

    Class-map: voip-rtp (match-all)
      136435 packets, 8731840 bytes
      30 second offered rate 51000 bps, drop rate 0 bps
      Match: ip rtp 16384 16383
      Weighted Fair Queueing
        Output Queue: Conversation 265
        Bandwidth 64 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 48550/3107200
        (depth/total drops/no-buffer drops) 14/0/0

    Class-map: class-default (match-any)
      1958 packets, 1122560 bytes
      30 second offered rate 59000 bps, drop rate 0 bps
      Match: any
      Weighted Fair Queueing
        Flow Based Fair Queueing
        Maximum Number of Hashed Queues 256
        (total queued/total drops/no-buffer drops) 15/0/0

! This command just lists the configuration in a concise manner.
R3# show policy-map
  Policy Map queue-voip
    Class voip-rtp
      Weighted Fair Queueing
        Bandwidth 64 (kbps) Max Threshold 64 (packets)
    Class class-default
      Weighted Fair Queueing
        Flow based Fair Queueing
        Max Threshold 64 (packets)

Defining and Limiting CBWFQ Bandwidth

Cisco IOS checks a CBWFQ policy map to ensure that it does not allocate too much bandwidth. IOS performs the check when the service-policy output command is added; if the policy map defines too much bandwidth for that interface, the service-policy command is rejected. IOS determines the allowed bandwidth based on two interface subcommands: the bandwidth command and the max-reserved-bandwidth command (abbreviated hereafter as int-bw and max-res, respectively). The non-reservable bandwidth is meant for overhead traffic, much like CQ's system queue.

IOS allows a policy map to allocate bandwidth based on the product of int-bw and max-res. In other words, with a default max-res setting of 75 (75 percent), on an interface with int-bw of 256 (256 kbps), the policy map could allocate at most 192 kbps of bandwidth with its various bandwidth commands. Example 15-5 shows a simple example with a policy map that contains one class that has 64 kbps configured. The service-policy command is rejected on an interface whose bandwidth is set to 64 kbps.

Example 15-5 CBWFQ Rejected Due to Request for Too Much Bandwidth

! max-res was defaulted to 75, so only 75% of 64 kbps, or 48 kbps,
! is available. Note that the 48 kbps is mentioned in the error message.
R3(config-cmap)# policy-map explicit-bw
R3(config-pmap)# class class1
R3(config-pmap-c)# bandwidth 64
R3(config-pmap-c)# int s 0/1
R3(config-if)# bandwidth 64
R3(config-if)# service-policy output explicit-bw
I/f Serial0/1 class class1 requested bandwidth 64 (kbps), available only 48 (kbps)

To avoid such problems, the engineer can simply pay attention to the details and ensure that the policy map's configured bandwidth commands do not total more than max-res * int-bw. Alternatively, max-res can be set to a higher value, up to 100; however, Cisco does not recommend changing max-res.

The bandwidths can also be defined as percentages using either the bandwidth percent or bandwidth remaining percent command. By using percentages, it is easier to ensure that a policy map does not attempt to allocate too much bandwidth.

The two percentage-based bandwidth command options work in slightly different ways. Figure 15-6 shows the concept for each.

Figure 15-6 Bandwidth Percent and Bandwidth Remaining Percent Concepts

KEY POINT

(The figure contrasts the two commands: bandwidth percent values are percentages of the interface bandwidth (int-bw), and their sum must not exceed max-res; bandwidth remaining percent values are percentages of the reservable bandwidth (max-res * int-bw), and their sum must not exceed 100.)

The bandwidth percent bw-percent command sets a class's reserved bandwidth as a percentage of int-bw. For instance, in Example 15-4, if the bandwidth percent 50 command had been used instead of bandwidth 64, the voip-rtp class would have used 50% * 128 kbps, or 64 kbps. IOS checks all the bandwidth percent commands in a single policy map to ensure that the total does not exceed the max-res setting for the interface—in other words, with a default setting for max-res, all the bandwidth percent commands in a single policy map cannot total more than 75.

The bandwidth remaining percent bw-percent command sets a class's reserved bandwidth as a percentage of remaining bandwidth. Remaining bandwidth is the reservable bandwidth, calculated as int-bw * max-res. This method allows a policy map to allocate percentages that total 100 (100 percent). Using Example 15-4 again, the remaining bandwidth would be 75% * 128 kbps, or 96 kbps, and the command bandwidth remaining percent 50 would allocate 48 kbps for a class.
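To make the difference concrete, the following sketch shows two alternative ways the voip-rtp class from Example 15-4 could have been configured with percentages rather than an explicit rate. The policy-map names voip-pct and voip-rem-pct are hypothetical; the math in the comments assumes the 128-kbps interface bandwidth and the default max-res of 75 used in that example.

! Hypothetical alternative 1: bandwidth percent.
! 50 percent of int-bw (128 kbps) = 64 kbps reserved for voip-rtp.
policy-map voip-pct
 class voip-rtp
  bandwidth percent 50
 class class-default
  fair-queue
!
! Hypothetical alternative 2: bandwidth remaining percent.
! Reservable bandwidth = 75% * 128 kbps = 96 kbps, so 50 percent of
! that remaining bandwidth = 48 kbps reserved for voip-rtp.
policy-map voip-rem-pct
 class voip-rtp
  bandwidth remaining percent 50
 class class-default
  fair-queue

Only one of these policy maps would be applied to the interface at a time, because each policy map may use only one variation of the bandwidth command.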

NOTE Using the bandwidth remaining percent command is particularly useful with LLQ, and will be explained in that context later in the chapter. The reason is that the remaining-bandwidth calculation is changed by the addition of LLQ.

Note that in a single policy map, only one of the three variations of the bandwidth command may be used. Table 15-7 summarizes the three methods for reserving bandwidth with CBWFQ.

KEY POINT

Table 15-7 Reference for CBWFQ Bandwidth Reservation

Method               Amount of Bandwidth Reserved by the bandwidth Command          The Sum of Values in a Single Policy Map Must Be <= ...
Explicit bandwidth   As listed in commands                                          max-res * int-bw
Percent              A percentage of the int-bw                                     max-res setting
Remaining percent    A percentage of the reservable bandwidth (int-bw * max-res)    100

Low-Latency Queuing

Low-latency queuing sounds like the best queuing tool possible, just based on the name. What packet wouldn't want to experience low latency? As it turns out, for delay (latency) sensitive traffic, LLQ is indeed the queuing tool of choice. LLQ looks and acts just like CBWFQ in most regards, except it adds the capability for some queues to be configured as low-latency queues. LLQ schedules these specific queues as strict-priority queues, just like PQ schedules the high-priority queue. In other words, LLQ always services packets in these priority queues first.

LLQ terminology is sometimes used in a couple of different ways. With a single policy map that has at least one low-latency queue, the policy map might be considered to be implementing LLQ, while at the same time that one low-latency queue is often called "the LLQ." Sometimes a single low-latency queue is even called "the PQ," as a reference to the PQ-like behavior, or simply "a priority queue."

While LLQ adds a low-latency queue to CBWFQ, it also prevents the queue starvation that occurs with PQ. LLQ actually polices the PQ based on the configured bandwidth. In effect, the bandwidth given to an LLQ priority queue is both the guaranteed minimum and policed maximum. (You may recall from Chapter 14, "Classification and Marking," that the DiffServ Expedited Forwarding PHB formally defines the priority queuing and policing PHBs.) As a result, the packets that make it out of the queue experience low latency, but some may be discarded to prevent starving the other queues.

Figure 15-7 depicts the scheduler logic for LLQ. Note that the PQ logic is shown, but with the policer check as well.

Figure 15-7 LLQ Scheduler Logic


LLQ configuration requires one more command in addition to the commands used for CBWFQ configuration. Instead of using the bandwidth command on a class, use the priority command:

priority {bandwidth-kbps | percent percentage} [burst]

This class subcommand enables LLQ in the class, reserves bandwidth, and enables the policing function. You can also configure the burst size for the policer with this command, but the default setting of 20 percent of the configured bandwidth is typically a reasonable choice.

Example 15-6 shows a sample LLQ configuration, using the following criteria. As in Example 15-4, the LLQ policy is applied to R3's s0/0 interface from Figure 15-5.

■ Packets will already have been marked with good DSCP values.

■ VoIP payload is already marked DSCP EF and should be placed in a low-latency queue with 58 kbps of bandwidth.

■ AF41, AF21, and AF23 traffic should get 22, 20, and 8 kbps, respectively.

■ All other traffic should be placed into class class-default, which should use WRED and WFQ.

Example 15-6 LLQ for EF, CBWFQ for AF41, AF21, AF23, and All Else

! The class maps used by policy map queue-on-dscp are not shown, but the names imply
! what each class map has been configured to match. Note that the priority 58 command
! makes class dscp-ef an LLQ.
policy-map queue-on-dscp
 class dscp-ef
  priority 58
 class dscp-af41
  bandwidth 22
 class dscp-af21
  bandwidth 20
  random-detect dscp-based
 class dscp-af23
  bandwidth 8
  random-detect dscp-based
 class class-default
  fair-queue
  random-detect dscp-based
!
! max-res has to be raised or the policy map would be rejected.
interface Serial0/0
 bandwidth 128
 encapsulation frame-relay
 load-interval 30
 max-reserved-bandwidth 85
 service-policy output queue-on-dscp

! Below, for class dscp-ef, note the phrase "Strict Priority," as well as the
! computed policing burst of 1450 bytes (20% of 58 kbps, divided by 8 to convert
! the value to a number of bytes).
R3# show policy-map queue-on-dscp
  Policy Map queue-on-dscp
    Class dscp-ef
      Weighted Fair Queueing
        Strict Priority
        Bandwidth 58 (kbps) Burst 1450 (Bytes)
! lines omitted for brevity

! Note the statistics below. Any packets dropped due to the policer would show
! up in the last line below.
R3# show policy-map interface s 0/0 output class dscp-ef
 Serial0/0

  Service-policy output: queue-on-dscp

    Class-map: dscp-ef (match-all)
      227428 packets, 14555392 bytes
      30 second offered rate 52000 bps, drop rate 0 bps
      Match: ip dscp ef
      Weighted Fair Queueing
        Strict Priority
        Output Queue: Conversation 40
        Bandwidth 58 (kbps) Burst 1450 (Bytes)
        (pkts matched/bytes matched) 12194/780416
        (total drops/bytes drops) 0/0

Defining and Limiting LLQ Bandwidth

The LLQ priority command provides two syntax options for defining the bandwidth of an LLQ—a simple explicit amount, or bandwidth as a percentage of interface bandwidth. (There is no remaining bandwidth equivalent for the priority command.) However, unlike the bandwidth command, both the explicit and percentage versions of the priority command can be used inside the same policy map.

IOS still limits the amount of bandwidth in an LLQ policy map, with the actual bandwidth from both LLQ classes (with priority commands) and non-LLQ classes (with bandwidth commands) not being allowed to exceed max-res * int-bw. Although the math is easy, the details can get confusing, especially because a single policy map could have one queue configured with priority bw, another with priority percent bw, and others with one of the three versions of the bandwidth command. Figure 15-8 shows an example with three versions of the commands.

The figure shows both versions of the priority command. Class1 has an explicit priority 32 command, which reserves 32 kbps. Class2 has a priority percent 25 command, which, when applied to the interface bandwidth (256 kbps), gives class2 64 kbps.

Figure 15-8 Priority, Priority Percent, and Bandwidth Remaining Percent

KEY POINT

(The figure shows an interface with 256 kbps of bandwidth, of which 64 kbps is unreservable (256 - (.75 * 256)) and 192 kbps is reservable. Class class1 is configured with priority 32, class class2 with priority percent 25, and class class3 with bandwidth remaining percent 75. The remaining bandwidth is found by subtracting both priority reservations from the reservable bandwidth; class3 reserves 75 percent of that remainder, and the rest stays unallocated.)

The most interesting part of Figure 15-8 is how IOS views the remaining-bandwidth concept when priority queues are configured. IOS subtracts the bandwidth reserved by the priority commands as well. As a result, a policy map can essentially allocate non-priority classes based on percentages of the leftover (remaining) bandwidth, with those values totaling 100 (100 percent).
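As an illustration only, the scenario in Figure 15-8 might be configured along the lines of the following sketch (the class maps for class1, class2, and class3 are not shown, and the policy-map name mixed-bw is hypothetical). The comments repeat the remaining-bandwidth arithmetic just described, assuming the default max-res of 75 percent.

policy-map mixed-bw
 class class1
  ! Explicit LLQ reservation: 32 kbps
  priority 32
 class class2
  ! LLQ as a percentage of int-bw: 25% of 256 kbps = 64 kbps
  priority percent 25
 class class3
  ! Remaining bandwidth = (75% * 256 kbps) - 32 - 64 = 96 kbps;
  ! this class reserves 75% of that remainder, or 72 kbps
  bandwidth remaining percent 75
!
interface Serial0/1
 bandwidth 256
 service-policy output mixed-bw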

LLQ with More Than One Priority Queue

LLQ allows multiple queues/classes to be configured as priority queues. This raises an obvious question: which queue gets scheduled first? As it turns out, LLQ actually places the packets from multiple LLQs into a single internal LLQ. So, packets in the different configured priority queues still get scheduled ahead of non-priority queues, but among themselves they are serviced in order of arrival.

KEY POINT

So why use multiple priority queues? The answer is policing. By policing traffic in one class at one speed, and traffic in another class at another speed, you get more granularity for the policing function of LLQ. For instance, when planning for video and voice, you can place each into a separate LLQ and get low-latency performance for both types of traffic, while at the same time preventing video traffic from consuming the bandwidth engineered for voice, and vice versa.
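For example, a policy map along these lines (with hypothetical class and policy-map names, applied to a faster hypothetical link) would give voice and video separate policers while both classes are still serviced from the single internal low-latency queue:

policy-map voice-and-video
 class voice
  ! Voice is policed at 64 kbps but still receives strict-priority service
  priority 64
 class video
  ! Video is policed separately at 384 kbps, so a video burst cannot
  ! consume the bandwidth engineered for voice
  priority 384
 class class-default
  fair-queue
!
interface Serial0/1
 bandwidth 768
 service-policy output voice-and-video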

Miscellaneous CBWFQ/LLQ Topics

CBWFQ and LLQ allow a policy map either to allocate bandwidth to the class-default class or not. When a bandwidth command is configured under class class-default, that minimum bandwidth is indeed reserved for the class. (IOS will not allow the priority command in class-default.) When class class-default does not have a bandwidth command, IOS internally allocates any unassigned bandwidth among all classes. As a result, class class-default may not get much bandwidth unless a minimum amount is reserved for it with the bandwidth command.
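For instance, the following variation on Example 15-4 (with a hypothetical policy-map name) reserves 32 kbps for class-default rather than letting it compete for leftover bandwidth. Note that 64 + 32 = 96 kbps, which is exactly 75 percent of the 128-kbps interface bandwidth, so the policy map would still be accepted with the default max-res setting.

policy-map queue-voip-guaranteed
 class voip-rtp
  bandwidth 64
 class class-default
  ! Guarantees class-default a 32-kbps minimum
  bandwidth 32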

This chapter's coverage of guaranteed bandwidth allocation is based on the configuration commands. In practice, a policy map may not have packets in all queues at the same time; in that case, the active queues get more than their reserved bandwidth. IOS allocates the extra bandwidth proportionally to each active class's bandwidth reservation.

Finally, IOS uses queuing only when congestion occurs. IOS considers congestion to be occurring when the hardware queue is full, which can happen even when the offered load of traffic is far less than the clock rate of the link. So, a router could have a service-policy output command on an interface, with LLQ configured, but the LLQ logic would be used only when the hardware queue fills.
