Link Fragmentation and Interleaving

Both types of QoS tools covered in this chapter address bandwidth constraints to some degree. Compression tools directly attack bandwidth constraints by lowering the bandwidth required to forward packets. Link fragmentation and interleaving (LFI) tools directly lower delay by defeating a side effect of a small transmit clock speed, namely serialization delay.

A quick review of serialization delay should help you make more sense out of LFI tools. Serialization delay is the time required to send a frame over a physical link. If a link has a physical clock rate of x bps, it takes 1/x seconds to send a single bit. If a frame has y bits in it, it takes y/x seconds to serialize the frame. The faster the link, the lower the serialization delay. On a 56-kbps link, for example, it takes 1/56,000 of a second to send 1 bit. A 1500-byte frame (12,000 bits) takes 12,000/56,000 seconds to serialize, or roughly 214 ms.
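The arithmetic above is simple enough to capture in a few lines. The following sketch (function name is illustrative, not from any Cisco tool) computes serialization delay from frame size and clock rate:

```python
def serialization_delay_ms(frame_bytes: int, link_bps: int) -> float:
    """Time to clock a frame onto the wire, in milliseconds (y bits / x bps)."""
    frame_bits = frame_bytes * 8
    return frame_bits / link_bps * 1000

# A 1500-byte frame on a 56-kbps link:
print(round(serialization_delay_ms(1500, 56_000)))  # -> 214 (ms)
```

The same function shows why faster links hurt less: at T1 speed (1536 kbps), the same frame serializes in under 8 ms.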

When a router starts to send a frame out of an interface, it sends the complete frame. If a small, delay-sensitive frame needs to exit an interface, and the router has just begun to send a large frame, the small frame must wait until the whole large frame has been sent before the router will send the small, delay-sensitive frame. As seen in the preceding example, a 1500-byte frame takes 214 ms to serialize at 56 kbps, which is far too long for the small frame to wait if it is part of a VoIP stream.

LFI tools attack the serialization delay problem by ensuring that large packets do not delay smaller packets. They accomplish this by dividing larger packets (fragmentation) and interleaving later-arriving smaller packets among the fragments of the larger packet. The smaller, delay-sensitive interleaved packets, typically VoIP, are defined in your QoS policy. Figure 7-5 outlines the basic process.

Figure 7-5 Basic Concept Behind LFI Tools

[Figure contents: The upper queue ("Interface Output Queue, no LFI") shows a delay-sensitive 60-byte packet waiting behind a 1500-byte packet. The lower queue ("Interface Output Queue, with LFI, 300-byte fragments") shows 300-byte fragments #5, #4, #3, and #2 of the original packet, then the delay-sensitive 60-byte packet, then fragment #1.]

As shown in the upper queue in the figure, without LFI, the small 60-byte packet must wait for the full 1500-byte packet to be forwarded. In the lower queue, with LFI enabled, IOS can choose to let the smaller packet exit the interface ahead of some of the fragments of the larger packet.

Before examining LFI in more detail, you need to take a closer look at the terms "packet" and "frame." In most cases in this book, these terms have been used interchangeably. However, it is important to realize what really gets placed into the queues, and what really gets fragmented, when discussing LFI tools.

First, we need a shared definition of what each of the two terms means. Packet refers to the entity that flows through the network, including the Layer 3 header, all headers from layers above Layer 3, and the end-user data. Packets do not include the data-link (Layer 2) headers and trailers. Frames include the packet, as well as the data-link (Layer 2) header and trailer.

Queuing tools actually place frames into the queues. For instance, Weighted Fair Queuing (WFQ) on a PPP serial interface places PPP frames into the queues. Concerning queuing tools, the distinction does not really have much bearing on the choices you make. In addition, because most people tend to use the term "packet" more often, this book just uses packet when it does not matter whether you care about the packet or the frame.

LFI tools require you to think about what happens to the packet, and what happens to the frame. Consider Figure 7-6, which shows some of the details of an unfragmented frame, and a fragmented frame, using Frame Relay.

Figure 7-6 LFI Application to Packets and Frames, 1500-Byte Packet

[Figure contents: Unfragmented, the 1500-byte packet is carried in a single 1509-byte frame: a 6-byte FR header, the 1500-byte packet, and a 3-byte FR trailer. Fragmented with a fragment size of 511, the packet is carried in three 511-byte frames, each consisting of an 8-byte FR header, a 500-byte piece of the packet, and a 3-byte FR trailer.]

In the upper part of the figure, a 1500-byte packet has an extra 9 bytes of Frame Relay header and trailer added to it, to form a 1509-byte frame. In the lower part of the figure, the 1500-byte packet has been fragmented into three 500-byte fragments, and then placed into Frame Relay frames. It turns out that with FRF.12 LFI, an additional 2 bytes of header are needed to manage the fragments, so each of the three frames totals 511 bytes in length.

Technically, the fragment size used in the figure is 511 bytes, not 500. Most people tend to think of it as "the router fragmented the 1500-byte packet into three 500-byte fragments." In reality, the router performs logic like that in the following list:

• The router fragments the packet into smaller pieces.

• The router adds the appropriate data-link headers and trailers, including any headers specifically needed for fragmentation support.

• The length of the resulting frames (including data-link headers/trailers) does not exceed the fragmentation size configured.

• The router adds these frames to the appropriate queue.

So, the router fragments packets into smaller pieces, but the size of the pieces is determined by the fragment size, which is based on the frame size. Therefore, does LFI really fragment packets, or frames? Frankly, either term works. When you are choosing the size of the fragments, however, always remember that the fragment size determines the size of the frames, not the packets. Therefore, you should consider the length of the data-link headers and trailers when choosing the size of the fragments.
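The relationship between the configured fragment size and the packet payload per frame can be sketched as follows. The 8-byte header and 3-byte trailer are the FRF.12 Frame Relay values shown in Figure 7-6; the function name and defaults are illustrative:

```python
def max_packet_payload(fragment_size: int, l2_header: int = 8, l2_trailer: int = 3) -> int:
    """Largest piece of a packet that fits in one frame of the configured
    fragment size. Overhead defaults match the FRF.12 example in Figure 7-6."""
    return fragment_size - l2_header - l2_trailer

# A configured fragment size of 511 leaves room for 500 bytes of packet data:
print(max_packet_payload(511))  # -> 500
```

This is why a 1500-byte packet with a 511-byte fragment size yields exactly three fragments: each frame carries 500 bytes of the packet.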

Multilink PPP LFI

The core concept behind LFI, and its benefits, is very straightforward. The details, however, can be a little confusing, mainly because IOS LFI tools interact directly with IOS queuing tools. In addition, the two LFI tools covered on the Cisco QoS exams happen to behave differently as to how they interact with queuing tools. So to understand where LFI functions take place, you need to examine each tool specifically. This section covers multilink PPP LFI (MLP LFI), with Frame Relay fragmentation (FRF) covered in the next section of this chapter.

Figure 7-7 depicts how MLP LFI works with a queuing tool on an interface.

Figure 7-7 MLP LFI Interaction with Queuing

[Figure contents: A 1500-byte packet arrives at R1, followed by one delay-sensitive 60-byte packet. The fragments and the small packet pass through the interface output queues and the queuing scheduler into the TX Queue.]

The figure outlines a lot of the detailed concepts behind LFI. In this example, a 1500-byte packet first arrives at R1, followed by a 60-byte packet. The fragmentation logic has been configured to fragment the frames down to a little more than 300 bytes, to make room for 300 bytes from the packet, and a little more for the data-link headers and trailers. After fragmentation, the queuing tool on the interface classifies the frames into their respective queues, which in this example happens to be two different queues. (The queuing tool's classification step works exactly as described in Chapter 4, "Congestion Management.")

Now look to the far right side of the figure. The TX Queue is shown, with a queue length of 2. In this example, an assumption has been made that the small packet arrived after IOS had placed the first two fragments of the large packet into the two available slots in the TX Queue, with the last three fragments being placed into Queue 2. The TX Queue is always a single FIFO queue, as described in Chapter 4. In other words, the small packet does not interrupt the router while it is in the middle of sending fragment 1, nor does the small packet have a chance to be sent before fragment 2, because fragment 2 is already in the TX Queue. The best behavior the small packet can hope for is to be the next packet placed onto the end of the TX Queue. Therefore, for now, the small packet has been placed into Queue 1.

Now look just to the left of the TX Queue, between the two interface output queues and the TX Queue. The term "schedule" reminds us that the queuing scheduler chooses the next packet to be moved from the output queues to the TX Queue (as described in Chapter 4). The queuing tool's scheduler may decide to take the next packet from Queue 1 or Queue 2—a decision totally based on the logic of the queuing tool.

Interleaving occurs when the queuing scheduler decides to service the queue that holds the small packet next, rather than the queue holding the next fragment of the large packet. If Low Latency Queuing (LLQ) has been configured, and Queue 1 is the low-latency queue, the scheduler takes the small packet next, meaning that the small packet would be interleaved between fragments of the larger packet. If the queuing tool were Custom Queuing (CQ), and the queuing scheduler were able to send more bytes from Queue 2 in this cycle, fragment 3 would be sent next.

Maximum Serialization Delay and Optimum Fragment Sizes

How large should the fragments be to reduce serialization delay to an acceptable level? Well, the real answer lies in an analysis of the delay budgets for your network. From that analysis, you determine the maximum serialization delay you can have on each link.

The delay budget includes many delay components, such as queuing delay, propagation delay, shaping delay, network delay, and serialization delay. Based on that delay budget, you determine how much serialization delay you can afford on a particular link. Figure 7-8 depicts example delay values for various delay components.

Figure 7-8 Review of Delay Components, Including Serialization Delay

[Figure contents: Server 1 sends packets across the network. For packets flowing left to right, the figure lists example values for each delay component, with a total delay of 95 ms.]

Now imagine that you need to configure R1 in the figure to use MLP LFI. You already know that you want a maximum serialization delay of 10 ms, and conveniently, MLP LFI enables you to configure a max-delay parameter. MLP LFI then calculates the fragment size, based on the following formula:

Fragment size = max-delay * bandwidth

In this formula, bandwidth is the value configured on the bandwidth interface subcommand, and max-delay is the serialization delay configured on the ppp multilink fragment-delay command. For instance, R1 in Figure 7-8 shows a budget for 10 ms of serialization delay. On a 56-kbps link, a 10-ms max-delay would make the fragment size 56,000 * .01, or 560 bits, which is 70 bytes.
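The formula can be sketched as a small calculator. The function name is illustrative; the math is exactly max-delay times bandwidth, converted from bits to bytes:

```python
def mlp_fragment_size_bytes(max_delay_ms: float, bandwidth_bps: int) -> int:
    """Fragment size = max-delay * bandwidth, converted to bytes."""
    fragment_bits = max_delay_ms / 1000 * bandwidth_bps
    return int(fragment_bits / 8)

# R1's case: 10-ms budget on a 56-kbps link -> 560 bits -> 70 bytes.
print(mlp_fragment_size_bytes(10, 56_000))  # -> 70
```

In practice you configure the delay with ppp multilink fragment-delay and the bandwidth with the bandwidth interface subcommand; IOS derives the fragment size for you.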

Cisco generally suggests a maximum serialization delay per link of 10 ms in multiservice networks. Serialization delay for a 1500-byte packet is roughly 15 ms at 768 kbps and shrinks below 10 ms as link speeds approach T1; accordingly, Cisco recommends that LFI be considered on links with a 768-kbps clock rate and below.

The math used to find the fragment size, based on the serialization delay and bandwidth, is pretty easy. For perspective, Table 7-8 summarizes the calculated fragment sizes based on the bandwidth and maximum delay.

Table 7-8 Fragment Sizes Based on Bandwidth and Serialization Delay

Bandwidth/Link Speed | 10-ms Delay | 20-ms Delay | 30-ms Delay | 40-ms Delay
56 kbps              | 70          | 140         | 210         | 280
64 kbps              | 80          | 160         | 240         | 320
128 kbps             | 160         | 320         | 480         | 640
256 kbps             | 320         | 640         | 960         | 1280
512 kbps             | 640         | 1280        | 1920*       | 2560*
768 kbps             | 960         | 1920*       | 2880*       | 3840*
1536 kbps            | 1920*       | 3840*       | 5760*       | 7680*

(Fragment sizes in bytes, computed as max-delay * bandwidth / 8.)

* Values over 1500 exceed the typical maximum transmission unit (MTU) size of an interface. Configuring a fragment size larger than the MTU results in no fragmentation.
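As a cross-check, every value in Table 7-8 follows directly from the fragment-size formula. This sketch regenerates the table and flags sizes that exceed a 1500-byte MTU (variable names are illustrative):

```python
speeds_kbps = [56, 64, 128, 256, 512, 768, 1536]
delays_ms = [10, 20, 30, 40]
MTU = 1500

table = {}
for kbps in speeds_kbps:
    row = []
    for d in delays_ms:
        # Fragment size in bytes: max-delay * bandwidth / 8.
        size = kbps * 1000 * d // 8000
        row.append(f"{size}{'*' if size > MTU else ''}")
    table[kbps] = row
    print(f"{kbps:>5} kbps: {'  '.join(row)}")
```

Running this reproduces each row of Table 7-8, including the asterisks on values that would exceed the MTU.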
