
All packets in a network experience some delay between when the packet is first sent and when it arrives at its destination. Most of the concepts behind QoS mechanisms relate in some way to delay. Therefore, a deeper look into delay is useful. Take a look at Figure 1-5; this sample network is used often in this book.

Figure 1-5 Sample Network for Discussion of Delay


At what points will delay occur in this network? Well, at all points, in actuality. At some points in the network, the delay is so small that it can just be ignored for practical purposes. In other cases, the delay is significant, but there is nothing you can do about it! For a fuller understanding, consider the following types of delay:

• Serialization delay (fixed)

• Propagation delay (fixed)

• Queuing delay (variable)

• Forwarding/processing delay (variable)

• Shaping delay (variable)

• Network delay (variable)

• Compression delay (variable)

Each of these types of delay is explained over the next several pages. Together, the types of delay make up the components of the end-to-end delay experienced by a packet.

Serialization Delay

Imagine you are standing at a train station. A train comes by but doesn't stop; it just keeps going. Because the train cars are connected serially one to another, a time lag occurs between when the engine car at the front of the train first gets to this station and when the last car passes by. If the train is long, it takes more time until the train fully passes. If the train is moving slowly, it takes longer for all the cars to pass. In networking, serialization delay is similar to the delay between the first and last cars in a train.

Serialization delay defines the time it takes to encode the bits of a packet onto the physical interface. If the link is fast, the bits can be encoded onto the link more quickly; if the link is slow, it takes longer to encode the bits on the link. Likewise, if the packet is short, it does not take as long to put the bits on the link as compared with a long packet.

Use the following formula to calculate serialization delay for a packet:

Serialization delay = #bits sent / link speed

Suppose, for instance, that Hannah sends a 125-byte packet to Server1. Hannah sends the packet over the Fast Ethernet to the switch. The 125 bytes equal 1000 bits, so at Fast Ethernet speeds, it takes 1000 bits/100,000,000 bits per second (bps), or .01 ms, to serialize the packet onto the Fast Ethernet. Another .01 ms of serialization delay is experienced when the switch sends the frame to R1. (I ignored the data-link header lengths to keep the math obvious.)

Next, when that same packet leaves R1 over a 56 kbps link to R2, serialization takes 1000 bits/ 56,000 bps, or 17.85 ms. The serialization component over Fast Ethernet is insignificant, whereas serialization becomes a more significant number on lower-speed serial links. Figure 1-6 shows the various locations where the packet from Hannah to Server1 experiences serialization delay.
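The serialization formula is easy to script. The following Python sketch (the function name is my own, not from the book) reproduces the two calculations above:

```python
def serialization_delay_ms(frame_bytes, link_bps):
    """Serialization delay: number of bits sent divided by link speed."""
    return frame_bytes * 8 / link_bps * 1000  # convert seconds to milliseconds

# 125-byte packet on Fast Ethernet (100 Mbps)
print(serialization_delay_ms(125, 100_000_000))  # 0.01 ms
# Same 125-byte packet on the 56-kbps link from R1 to R2
print(serialization_delay_ms(125, 56_000))       # ~17.86 ms
```

The same function yields the 1500-byte values used later in the chapter, such as 214 ms on a 56-kbps link.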

Figure 1-6 Serialization Delay


As Figure 1-6 shows, serialization delay occurs any time a frame is sent. On LAN links, the delay is insignificant for most applications. At steps 3 through 6 in the figure, the serialization delay is 17.85 ms, 7.8 ms, .02 ms, and .65 ms for the 125-byte packet, respectively. Also note that serialization delays do occur inside the Frame Relay cloud. (You can read more about delays inside the cloud in the "Network Delay" section later in this chapter.)

Table 1-4 lists the serialization delay for a couple of frame sizes and link speeds.

Table 1-4 Example Serialization Delay Values


Clock Rate of Link | Serialization Delay (125-Byte Frame; Milliseconds) | Serialization Delay (1500-Byte Frame; Milliseconds)
100 Mbps | .01 | .12
1.544 Mbps | .65 | 8
512 kbps | 2 | 24
128 kbps | 7.8 | 93
56 kbps | 17.85 | 214

Propagation Delay

Imagine you are watching a train again, this time from a helicopter high in the air over the tracks. You see the train leaving one station, and then arriving at the second station. Using a stopwatch, you measure the amount of time it takes from the first car leaving the first station until the first car arrives at the second station. Of course, all the other cars take the same amount of time to get there as well. This delay is similar to propagation delay in networking.

Propagation delay defines the time it takes a single bit to get from one end of the link to the other. When an electrical or optical signal is placed onto the cable, the energy does not propagate to the other end of the cable instantaneously—some delay occurs. The speed of energy on electrical and optical interfaces approaches the speed of light, and the network engineer cannot override the laws of physics! The only variable that affects the propagation delay is the length of the link. Use the following formula to calculate propagation delay:

Propagation delay = Length of link (meters) / 3.0 × 10^8 meters/second

or

Propagation delay = Length of link (meters) / 2.1 × 10^8 meters/second

where 3.0 × 10^8 meters/second is the speed of light in a vacuum. Many people use 2.1 × 10^8 meters/second for the speed of light over copper and optical media when a more exact measurement is needed. (Seventy percent of the speed of light is the generally accepted rule for the speed of energy over electrical cabling.)

Propagation delay occurs as the bits traverse the physical link. Suppose, for instance, that the point-to-point link between R1 and R2 is 1000 kilometers (1,000,000 meters) long. The propagation delay would be as follows:

1,000,000 meters / 2.1 × 10^8 meters/second = 4.8 ms

Figure 1-7 shows two contrasting examples of serialization and propagation delay.

Figure 1-7 Serialization and Propagation Delay for Selected Packet and Link Lengths


As you can see in Figure 1-7, the length of the link affects propagation delay, whereas the size of the packet and link speed affect serialization delay. The serialization delay is larger for larger packets, but the propagation delay is equal for different-sized packets, on the same link. One common misconception is that the link speed, or clock rate, affects propagation delay—it does not! Table 1-5 lists the various propagation delays and serialization delays for parts of Figure 1-6.

Table 1-5 Example Serialization and Propagation Delays with Figure 1-6

Step Number from Figure | Length of Link | Clock Rate of Link | Propagation Delay (Milliseconds) | Serialization Delay (125-Byte Packet; Milliseconds) | Serialization Delay (1500-Byte Packet; Milliseconds)
1 | 50 m | 100 Mbps | .002 | .01 | .12
2 | 10 m | 100 Mbps | .0004 | .01 | .12
3 | 1000 km | 56 kbps | 4.8 | 17.85 | 214
4 | 5 km | 128 kbps | .024 | 7.8 | 94
5 | 1000 km | 44.232 Mbps | 4.8 | .02 | .24
6 | 10 km | 1.544 Mbps | .048 | .65 | 7.8

If the link from Hannah to SW1 is 100 meters, for example, propagation is 100/(2.1 × 10^8), or .48 microseconds. If the T3 between the two Frame Relay switches is 1000 kilometers, the delay is 1,000,000/(2.1 × 10^8), or 4.8 ms. Notice that propagation delay is not affected by clock rate on the link—even on the 56-kbps Frame Relay access link, at 1000 km (a long Frame Relay access link!), the propagation delay would only be 4.8 ms.
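The propagation formula can be sketched the same way as serialization delay; the constant and function name here are illustrative, not from the book:

```python
SPEED_OVER_MEDIA = 2.1e8  # meters/second, ~70 percent of the speed of light

def propagation_delay_ms(link_meters):
    """Propagation delay: link length divided by signal speed on the medium."""
    return link_meters / SPEED_OVER_MEDIA * 1000  # convert seconds to ms

print(propagation_delay_ms(100))        # ~0.00048 ms (.48 microseconds)
print(propagation_delay_ms(1_000_000))  # ~4.76 ms (rounded to 4.8 in the text)
```

Note that link speed (clock rate) never appears in the function, which is exactly the point: only link length matters.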

Queuing Delay

Packets experience queuing delay when they must wait for other packets to be sent. Most people think of queuing delay when they think of QoS, and of queuing strategies when they think of QoS tools—but queuing tools are just one category of QoS tool. Queuing delay consists of the time spent in the queues inside the device—typically just the output queues in a router, because input queuing is normally negligible in a router. However, the queuing time can be relatively large—hundreds of milliseconds, or maybe even more. Consider Figure 1-8, where R1 queues four 1500-byte packets that Hannah sent to Server1.

Figure 1-8 R1 Queues Four 1500-Byte Packets for Transmission


Because it takes 1500 * 8 / 56,000, or 214 ms, to serialize each 1500-byte packet, the other packets need to either be stored in memory or discarded. Therefore, the router uses some memory to hold the packets. The simplest form of queuing is to use a single queue, serviced with first-in, first-out (FIFO) logic—as is shown in the figure. After 856 ms, all four packets would have been sent out the serial link. Assuming that the link was not busy when Hannah sent these four packets, how much queuing delay did each packet experience? Well, the first packet experienced no queuing delay. The second packet waited on the first, or 214 ms. The third packet waited on the first two—or 428 ms. And the fourth packet waited on the first three, for a total of 642 ms.

Queuing provides a useful function, because the second, third, and fourth packets would have been discarded without queuing. However, too much of a good thing is not always good! Imagine that Hannah sends 100 1500-byte packets all at once. If the queue in R1 is large enough, R1 could queue all 100 packets. What would the delay be for the one-hundredth packet? Well, 99 * 214 ms per packet, or roughly 21 seconds! If Hannah uses TCP, TCP has probably timed out and re-sent the packets—causing more congestion and queuing delay. And what about another user's packet that shows up right after Hannah's 100 packets? Still more delay. So, some queuing helps prevent packet drops, but large queues can cause too much delay.
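The FIFO arithmetic above can be sketched in a few lines of Python (an illustration of the reasoning, not router code): each packet's queuing delay is the sum of the serialization times of the packets ahead of it.

```python
def fifo_queuing_delays_ms(packet_sizes_bytes, link_bps):
    """Queuing delay of each packet: time spent waiting on the packets ahead."""
    delays, waiting = [], 0.0
    for size in packet_sizes_bytes:
        delays.append(waiting)
        waiting += size * 8 / link_bps * 1000  # this packet's serialization time

    return delays

# Four 1500-byte packets arriving at an idle 56-kbps link
print(fifo_queuing_delays_ms([1500] * 4, 56_000))  # ~[0, 214, 429, 643] ms
# The one-hundredth of 100 such packets waits roughly 21 seconds
print(fifo_queuing_delays_ms([1500] * 100, 56_000)[-1])  # ~21,214 ms
```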

Figure 1-9 combines all the delay components covered so far into one small diagram. Consider the delay for the fourth of the four 1500-byte packets sent by Hannah. The figure lists the queuing, serialization, and propagation delays.

Figure 1-9 Delay Components: Three Components, Single Router (R1)

(The figure shows the fourth of the four 1500-byte packets leaving R1's FIFO output queue, experiencing 642 ms of queuing delay, 214 ms of serialization delay, and 4.8 ms of propagation delay.)

The overall delay for a packet is the sum of all these delays from end to end. At R1, when all four packets have been received, the fourth packet experiences a total of about 860 ms of delay before it has been fully received at R2. And this example just shows the queuing delay in a single router (R1), and the serialization and propagation delay over a single link—end-to-end delay includes these delays at each router (queuing) and link (serialization and propagation) in the network.

Forwarding Delay

The term "forwarding delay" refers to the time taken to switch the packet inside the router or switch—simply put, the time required to forward the packet. This does not include all the time taken inside the router or switch; a more accurate view is to think of it as the time taken between examining the frame/packet on the input interface, and placing it in the output queue on the output interface. In other words, it does not include a queuing delay. It does include all the processing required between the time that a frame has been received in its entirety until the frame has either been queued on the output interface or has begun transmission on the output interface.

Cisco does not normally publish forwarding delay statistics for its different router models and internal processing methods. However, the higher the volume of packets a router can forward, and the more packets it forwards using an optimized processing method, presumably the lower the forwarding delay.

Most delay components in LAN switches are small enough not to matter. However, switches incur forwarding delay, just like routers—most of the time. Some LAN switches use store-and-forward logic, in which the entire frame must be received before any part of it is forwarded. However, some switches use cut-through or fragment-free forwarding, which means that the first bits of a frame are forwarded before the final bits are fully received. Technically, if you define forwarding delay as the time between receipt of the entire frame and that frame being queued for transmission, some LAN switches might actually have negative forwarding delay! It just depends on how you attribute the parts of the overall delay. Forwarding delay is typically a small enough component to ignore in overall delay budget calculations, so this book does not punish you with further discussion of these details!

For more information on internal processing methods such as Cisco Express Forwarding (CEF), you can review the Cisco Press book Inside Cisco IOS Software Architecture.

Shaping Delay

Traffic shaping causes additional delay by servicing queues more slowly than if traffic shaping were not used. Why should a router slow down sending packets if it does not have to? Well, traffic shaping slows the rate of traffic to match a contracted rate, because a carrier might discard traffic that exceeds that rate. So, which is better?

• Sending packets really fast and having them be dropped

• Sending packets more slowly, but not having them be dropped

The right answer is—it depends! If you would rather send packets more slowly, hoping that packets are not dropped, traffic shaping is the solution.

Carriers can drop frames and packets inside their networks for a variety of reasons. One of the most typical reasons is that most central-site routers use a fast access link, with remote sites using much slower links. If the central site uses a T1, and the remote site uses a 56-kbps link, frames may fill the queue inside the service provider's network, waiting to go across the 56-kbps access link. Many other events can cause the carrier to drop packets; these events are explained more fully in Chapter 5, "Traffic Policing and Shaping."

To understand the basic ideas behind shaping in a single router, consider Figure 1-10, where R2 has a 128-kbps access rate and a 64-kbps CIR on its VC to R3.

Figure 1-10 Traffic Shaping over the Frame Relay Network

(The figure shows four 1500-byte packets from a local LAN user arriving at R2, with AR: 128 kbps, CIR: 64 kbps, and Bc: 12,000 bits on the VC to R3.)

Suppose that the Frame Relay provider agrees to the 64-kbps CIR on the VC from R2 to R3, but the carrier tells you that it aggressively discards frames when you send more than 64 kbps. The access rate is 128 kbps. Therefore, you decide to shape, which means that R2 should average sending at 64 kbps, because sending faster than 64 kbps hurts more than it helps. In fact, in this particular instance, if R2 sends packets for this VC only half the time, the rate averages out to 64 kbps. Remember, bits can only be sent at the physical link speed, which is also called the access rate in Frame Relay. In effect, the router sends all packets at access rate, but the router purposefully delays sending packets, possibly even leaving the link idle, so that the rate over time averages out to about 64 kbps.

Chapter 5 will clear up the details. The key concept to keep in mind when reading other sections of this book is that traffic shaping introduces additional delay. Like many QoS features, shaping attempts to enhance one particular traffic characteristic (drops), but must sacrifice another traffic characteristic (delay) to do so.
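The figure's shaping parameters imply a simple interval calculation. Using the standard relationship Tc = Bc/CIR (Chapter 5 covers it properly; this is just a sketch of the arithmetic with the figure's values):

```python
AR = 128_000  # access rate, bps
CIR = 64_000  # committed information rate, bps
Bc = 12_000   # committed burst: bits sent per shaping interval

Tc = Bc / CIR * 1000      # interval length: 187.5 ms
send_ms = Bc / AR * 1000  # time spent transmitting at access rate: 93.75 ms
idle_ms = Tc - send_ms    # enforced idle time per interval: 93.75 ms
print(send_ms / Tc)       # 0.5 -> busy half the time, averaging 64 kbps
```

The enforced idle time is exactly the shaping delay that queued packets absorb.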

NOTE To QoS test takers: Neither the QoS nor DQoS course books list shaping delay as a delay component. Because it does affect delay, however, it is included here.

Network Delay

Most people draw a big cloud for a Frame Relay or ATM network, because the details are not typically divulged to the customer. However, the same types of delay components seen outside the cloud also exist inside the cloud—and the engineer that owns the routers and switches outside the cloud cannot exercise as much QoS control over the behavior of the devices in the cloud.

So how much delay should a packet experience in the cloud? Well, it will vary. The carrier might commit to a maximum delay value as well. However, with a little insight, you can get a solid understanding of the minimum delay a packet should experience through a Frame Relay cloud. Consider Figure 1-11, focusing on the Frame Relay components.

Figure 1-11 Frame Relay Network: Propagation and Serialization Delay Components


The propagation delay and serialization delay can be guessed pretty closely. No matter how many switches exist between R2 and R3, the cumulative propagation delays on all the links between R2 and R3 will be at least as much as the propagation delay on a point-to-point circuit. And with most large providers, because they have many points of presence (PoPs), the Frame Relay VC probably takes the same physical route as a point-to-point circuit would anyway. As for serialization delay, the two slowest links, by far, will be the two access links (in most cases). Therefore, the following account for most of the serialization delay in the cloud:

• The serialization delay to send the packet into the cloud

• The serialization delay at the egress Frame Relay switch, sending the packet to R3

Suppose, for example, that R2 and R3 are 1000 km apart, and a 1500-byte packet is sent. The network delay will be at least the propagation delay plus both serialization delays on the two access links:

Serialization (ingress at R2) = 1500 bytes * 8 / 128,000 bps = 94 ms

Serialization (egress to R3) = 1500 bytes * 8 / 1,544,000 bps = 7.8 ms

Propagation = 1,000,000 meters / 2.1 × 10^8 meters/second = 4.8 ms

For a total of 106.6 ms delay
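This lower bound is easy to compute for any pair of access links. A small Python sketch (function name and structure are my own) adds the two access-link serializations to the straight-line propagation delay:

```python
SPEED_OVER_MEDIA = 2.1e8  # meters/second over the media

def min_network_delay_ms(packet_bytes, ingress_bps, egress_bps, distance_m):
    """Lower bound on cloud delay: both access-link serializations plus propagation."""
    ser_in = packet_bytes * 8 / ingress_bps * 1000
    ser_out = packet_bytes * 8 / egress_bps * 1000
    prop = distance_m / SPEED_OVER_MEDIA * 1000
    return ser_in + ser_out + prop

# 1500-byte packet, 128-kbps ingress, T1 egress, R2 and R3 1000 km apart
print(min_network_delay_ms(1500, 128_000, 1_544_000, 1_000_000))  # ~106 ms
```

Queuing inside the cloud then adds a variable amount on top of this floor.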

Of course, the delay will vary—and will depend on the provider, the status of the network's links, and overall network congestion. In some cases, the provider will include delay limits in the contracted service-level agreement (SLA).

Queuing delay inside the cloud creates the most variability in network delay, just as it does outside the cloud. These delays are traffic dependent, and hard to predict.

NOTE To QoS test takers: Neither the QoS nor DQoS course books list network delay as a delay component. Because it does affect delay, however, it is included here.

Delay Summary

Of the types of delay covered so far in this chapter, all except shaping delay occur in every network. Shaping delay occurs only when shaping is enabled.

Two other delay components may or may not be found in a typical network. First, codec delay is experienced by voice and video traffic; it is covered in more depth in the section titled "Voice Delay Considerations." Second, compression delay: compression and decompression require processing, and the time taken to process a packet introduces delay. Chapter 7, "Link-Efficiency Tools," covers compression delay.

Table 1-6 summarizes the delay components listed in this section.

Table 1-6 Components of Delay Not Specific to One Type of Traffic


Delay Component | Definition | Where It Occurs
Serialization delay (fixed) | Time taken to place all bits of a frame onto the physical medium. Function of frame size and physical link speed. | Outbound on every physical interface; typically negligible on T3 and faster links.
Propagation delay (fixed) | Time taken for a single bit to traverse the physical medium from one end to the other. Based on the speed of light over that medium, and the length of the link. | Every physical link. Typically negligible on LAN links and shorter WAN links.
Queuing delay (variable) | Time spent in a queue awaiting the opportunity to be forwarded (output queuing), or awaiting a chance to cross the switch fabric (input queuing). | Possible on every output interface. Input queuing unlikely in routers, more likely in LAN switches.
Forwarding or processing delay (variable) | Time required from receipt of the incoming frame, until the frame/packet has been queued for transmission. | On every piece of switching equipment, including routers, LAN switches, Frame Relay switches, and ATM switches.
Shaping delay (variable) | Shaping (if configured) delays transmission of packets to avoid packet loss in the middle of a Frame Relay or ATM network. | Anywhere that shaping is configured, which is most likely on a router, when sending packets to a Frame Relay or ATM network.
Network delay (variable) | Delays created by the components of the carrier's network when using a service. For instance, the delay of a Frame Relay frame as it traverses the Frame Relay network. | Inside the service provider's network.

QoS Tools That Affect Delay

Several QoS features can help with delay issues. You'll find more detail about each of these tools in various chapters throughout this book. For now, however, knowing what each class of QoS tool accomplishes will help you sift through some of the details.

The best QoS tool for delay issues is . . . more bandwidth—again! More bandwidth helps bandwidth-related problems, and it also helps delay-related problems. Faster bandwidth decreases serialization delay. Because packets exit more quickly, queuing delay decreases. Higher CIR on your VCs reduces shaping delay. In short, faster bandwidth reduces delay!

Unfortunately, more bandwidth does not solve all delay problems, even if you could afford more bandwidth! In fact, in converged networks (networks with voice, video, and data), adding more bandwidth might mask delay problems that are best solved through other QoS tools or through better QoS design. The sections that follow address the QoS tools that can affect the delay a particular packet experiences.

Queuing (Scheduling)

The most popular QoS tool, queuing, involves choosing the packets to be sent based on something other than arrival time. In other words, instead of FIFO queuing with one queue, other queuing mechanisms create multiple queues, place packets into these different queues, and then pick packets from the various queues. As a result, some packets leave the router more quickly, with other packets having to wait longer. Although queuing does not decrease delay for all packets, it can decrease delay for delay-sensitive packets, and increase delay for delay-insensitive packets—and enabling a queuing mechanism on a router does not cost any cash, whereas adding more bandwidth does.

Each queuing method defines some number of queues and a method of scheduling them—in other words, rules for choosing the queue from which the next packet will be sent. Figure 1-12 depicts a queuing mechanism with two queues. Suppose Hannah sent four packets, but the fourth packet was sent by a videoconferencing package she was running, whereas the other three packets were for a web application she was using while bored with the video conference.

Figure 1-12 Sample Queuing Method: Two Queues

(The figure shows packets 1, 2, and 3 placed in one output queue, and packet 4 placed in a second queue.)

R1 could notice that packet 4 has different characteristics, and place it into a different queue. Packet 4 could exit R1 before some or all of the first three packets.
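The two-queue idea can be sketched with one possible scheduling rule: always service the delay-sensitive queue first. This is just an illustration of the scheduling concept (the figure does not specify a rule; the queue names and packet labels are my own):

```python
from collections import deque

web_q = deque(["pkt1", "pkt2", "pkt3"])  # the three web packets
video_q = deque(["pkt4"])                # the videoconferencing packet

def next_packet():
    # Strict-priority rule: drain the video queue before touching the web queue
    if video_q:
        return video_q.popleft()
    return web_q.popleft() if web_q else None

order = [next_packet() for _ in range(4)]
print(order)  # ['pkt4', 'pkt1', 'pkt2', 'pkt3']
```

With this rule, packet 4 exits first even though it arrived last, which is exactly the reordering the figure describes.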

Link Fragmentation and Interleaving

The time required to serialize a packet on a link is a function of the speed of the link, and the size of the packet. When the router decides to start sending the first bit of a packet, the router continues until the whole packet is sent. Therefore, if a delay-sensitive packet shows up just after a long packet has begun to be sent out an interface, the delay-sensitive packet must wait until the longer packet has been sent.

Suppose, for example, that two packets have arrived at R1. Packet 1 is 1500 bytes, and packet 2 is 200 bytes. The smaller packet is delay sensitive. Because packet 2 arrived just after the first bit of packet 1 was sent, packet 2 must wait 214 ms for packet 1 to be serialized onto the link. With link fragmentation and interleaving (LFI), packet 1 could be broken into three 500-byte fragments, and packet 2 could be interleaved (inserted) and sent on the link after the first of the three fragments of packet 1. Figure 1-13 depicts LFI operation.

Figure 1-13 Link Fragmentation and Interleaving

(The figure shows packet 1, 1500 bytes, arriving first, and packet 2, 200 bytes and delay sensitive, arriving second. Packet 1 is fragmented into three pieces in output queue 1, and packet 2 is interleaved so that it is sent after the first fragment. Legend: PxFy means packet number x, fragment number y.)

Note that packet 1 was fragmented into three pieces. Because packet 2 arrived after packet 1 had begun to be sent, packet 2 had to wait. With LFI, packet 2 does not have to wait for the entire original packet; instead, it waits for just one fragment to be sent.
