Latency

In contrast to broadcast-type media transmission (such as RealAudio), a two-way phone conversation over the PSTN is quite sensitive to latency. Most PSTN users notice round-trip delays that exceed 250 milliseconds (ms), which translates loosely into a one-way latency budget of 150 ms.

The ITU-T G.114 recommendation specifies 150 ms as the maximum preferred one-way latency for high-quality voice. Beyond that latency, users start to feel uneasy holding a two-way conversation and usually end up talking over each other.

At round-trip delays of 500 ms and beyond, phone calls become impractical. For comparison, the typical delay when speaking through a geostationary satellite is 150 to 500 ms.

Data applications are far less affected by delay because the receiving network device reassembles out-of-order packets according to their sequence numbers (TCP sequencing, for example). An additional 200 ms of delay on an e-mail or a Web page mostly goes unnoticed; voice callers sharing the same network, however, do notice it. When budgeting the one-way delay of voice traffic, you must take into account the delay added by each network segment and process along the path.
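To make the reassembly point concrete, here is a minimal sketch, with purely illustrative sequence numbers and payloads rather than any particular protocol implementation, of a receiver putting late, out-of-order packets back in order before handing the data to the application:

```python
# Minimal sketch: a receiver reorders packets by sequence number before
# delivering the data, so added network delay only postpones delivery.
# Sequence numbers and payloads below are purely illustrative.

arrived = [
    (3, "page "),      # (sequence number, payload) in arrival order
    (1, "Web "),
    (4, "data"),
    (2, "or e-mail "),
]

in_order = sorted(arrived, key=lambda pkt: pkt[0])   # reassemble by sequence
message = "".join(payload for _, payload in in_order)
print(message)   # "Web or e-mail page data"
```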

Components of the delay budget are separated into fixed and variable delay. Backbone transmission delay is a constant, dictated by distance, whereas changing network conditions add a variable delay to the total budget.

The significant components of latency are as follows:

• Backbone (network) latency— Delay incurred in traversing the VoIP backbone. To minimize this delay, keep the number of router hops between endpoints as small as possible. Network service providers can sometimes commit to an end-to-end delay limit over their managed backbones, and some negotiate with customers to identify voice traffic and provision priority connections for it rather than treating it like delay-insensitive data.

NOTE

The traceroute utility can be run from either the Cisco IOS CLI or any client-PC command prompt to determine the number of hops between endpoints.
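As a rough, unofficial illustration of the same idea from a client PC, the hop count can be pulled out of traceroute output with a short script. This sketch assumes a Unix-like host with traceroute installed (on Windows the command is tracert and the output format differs), and the target address is only an example:

```python
# Rough sketch: count the hops that traceroute reports to an endpoint.
# Assumes a Unix-like host with `traceroute` on the PATH; output parsing
# is best-effort and the target host below is a documentation address.
import re
import subprocess

def hop_count(host: str) -> int:
    result = subprocess.run(
        ["traceroute", "-n", host],
        capture_output=True, text=True, check=True,
    )
    # Each hop line begins with its hop number; take the largest one seen.
    hops = [int(m.group(1))
            for line in result.stdout.splitlines()
            if (m := re.match(r"\s*(\d+)\s", line))]
    return max(hops) if hops else 0

if __name__ == "__main__":
    print(hop_count("192.0.2.10"))
```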

• Codec latency— Compression algorithms have built-in delay costs; G.723, for example, adds a fixed 30 ms of delay. When this overhead is added to the total delay budget, a call can end up paying 32 to 35 ms to pass through the gateway. Different codec implementations can reduce the latency, but the reduction might come at the cost of reduced quality or greater bandwidth consumption.

• Jitter buffer depth— To compensate for fluctuating network conditions, many vendors implement a jitter buffer in their voice gateway product suites. The jitter buffer is a packet buffer that holds incoming packets for a specified amount of time before forwarding them to the codec for decompression. It smooths the packet flow and makes the codec more resilient to packet loss, delayed packets, and other transmission effects. Its one disadvantage is that it can add significant delay. The jitter buffer size is configurable and can be optimized for given network conditions; it is usually set to an integral multiple of the expected packet inter-arrival time, so that it buffers a whole number of packets. The sketch following this list shows how the jitter buffer fits into the overall one-way delay budget.
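The following sketch pulls these components together into a one-way delay budget checked against the G.114 target. Every figure in it is an assumed, illustrative value rather than a measured or vendor-specified number, including the two-packet jitter buffer sized as a multiple of a 20-ms packet interval:

```python
# Illustrative one-way delay budget for a VoIP call. Every value here is
# an assumed example figure, not a measurement or a vendor specification.

G114_ONE_WAY_TARGET_MS = 150      # ITU-T G.114 preferred maximum (one-way)

# Fixed components
codec_delay_ms   = 30             # e.g., algorithmic delay of a low-bit-rate codec
packetization_ms = 20             # one voice frame sent every 20 ms
propagation_ms   = 25             # backbone distance; fixed once the path is chosen

# Variable components
queuing_serialization_ms = 15     # depends on load, link speeds, and hop count

# Jitter buffer sized as an integral multiple of the packet inter-arrival time
packet_interval_ms = 20
packets_buffered   = 2
jitter_buffer_ms   = packets_buffered * packet_interval_ms   # 40 ms

total_ms = (codec_delay_ms + packetization_ms + propagation_ms
            + queuing_serialization_ms + jitter_buffer_ms)

print(f"One-way delay: {total_ms} ms "
      f"({'within' if total_ms <= G114_ONE_WAY_TARGET_MS else 'over'} "
      f"the {G114_ONE_WAY_TARGET_MS}-ms G.114 target)")
```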
