Equal Cost Load Balancing

Equal-cost load balancing, as its name implies, is the balancing of a traffic load across redundant links of equal cost. Figure 10-3 illustrates this in a small network, and the contents of Router A's routing table are presented in Table 10-4. In this table, you can see that there are multiple ports that Router A can use to get to the gateway router. These ports are called S0 and S1, or serial ports 0 and 1.

Figure 10-3: Load balancing across equal links.

Table 10-4: Router A Routing Table Contents

Router Name   Destination            Next Hop           Number of Hops   Cost Metric
A             Host 192.168.130.2     Local              0                500
              Host 192.168.130.9     Local              0                500
              Network 192.168.125    Gateway—Port S0    2                2500
              Network 192.68.253     Gateway—Port S0    2                2350
              Network 192.168.125    Gateway—Port S1    2                2500
              Network 192.68.253     Gateway—Port S1    2                2350

In Figure 10-3, Router A has two equal-cost serial connections (both T1s) to the gateway router. Because the two paths are of equal cost, IGRP has no basis for preferring one over the other; in practice, it splits the traffic equally between them. Balancing loads across equal-cost paths can be done in two different ways:

• Per packet

• Per destination

Balancing traffic on a per-packet basis means that sequential packets in a stream, bound for the same destination, may be sent out different interfaces. This can result in data arriving late or out of sequence, which can cause application problems.

Balancing traffic on a per-destination basis means that all the packets in a stream that are bound for the same destination will be forwarded via the same route. This alleviates the potential for problems caused by per-packet load balancing but can result in a somewhat less than perfect distribution of traffic across equal-cost links.
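
The difference between the two approaches can be sketched in a few lines of Python. This is only an illustration under assumed names, not how IGRP itself is implemented: a router with two equal-cost next hops, S0 and S1, chooses an outgoing port either by simple alternation (per packet) or by hashing the destination address (per destination).

import zlib
from itertools import cycle

# Two equal-cost next hops, as in Figure 10-3 (names assumed for illustration).
EQUAL_COST_PORTS = ["S0", "S1"]

# Per-packet balancing: alternate between the ports regardless of where
# each individual packet is headed.
_round_robin = cycle(EQUAL_COST_PORTS)

def per_packet_port(destination: str) -> str:
    return next(_round_robin)

# Per-destination balancing: hash the destination address so that every
# packet bound for the same place always leaves on the same port.
def per_destination_port(destination: str) -> str:
    index = zlib.crc32(destination.encode()) % len(EQUAL_COST_PORTS)
    return EQUAL_COST_PORTS[index]

stream = ["192.168.125.10"] * 3 + ["192.68.253.5"] * 3
print([per_packet_port(d) for d in stream])       # interleaves S0 and S1 within a stream
print([per_destination_port(d) for d in stream])  # each destination sticks to one port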

Which of these approaches IGRP uses depends on whether route caching is in effect. With route caching, the route to a destination is calculated once and stored in memory; the router then forwards each subsequent inbound packet in a stream over the same route its predecessor took, which yields per-destination behavior. The obvious drawback of route caching is that it can be memory intensive. If you disable route caching, load balancing is automatically performed per packet.
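
Route caching can be sketched the same way. This is a minimal illustration of the idea rather than the actual IOS mechanism: the first packet to a destination triggers a full route lookup, the chosen port is remembered in a dictionary keyed by destination, and every later packet to that destination reuses the cached entry. The memory cost is one entry per active destination, and per-destination forwarding falls out as a side effect.

import zlib

EQUAL_COST_PORTS = ["S0", "S1"]   # assumed equal-cost next hops
route_cache = {}                  # destination address -> outgoing port

def lookup_route(destination: str) -> str:
    # Stand-in for a full routing-table lookup among the equal-cost entries.
    return EQUAL_COST_PORTS[zlib.crc32(destination.encode()) % len(EQUAL_COST_PORTS)]

def forward(destination: str) -> str:
    # Cache miss: compute the route once and remember it.  Every later
    # packet to this destination follows the same cached path.
    if destination not in route_cache:
        route_cache[destination] = lookup_route(destination)
    return route_cache[destination]

print(forward("192.168.125.10"), forward("192.168.125.10"))  # same port both times
print("cached destinations:", len(route_cache))              # the memory cost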

Regardless of your preference for or against route caching, one of the most appealing aspects of multipath routing is its ability to recover automatically from network failures. In the event of a failure that impacts one of the paths in a multipath arrangement, traffic is automatically shunted over to the surviving paths. Figure 10-4 demonstrates the results of a link failure in the network shown in Figure 10-3.

In Figure 10-4, the dual T1 lines interconnecting Router A and the gateway router had been carrying traffic in round-robin fashion, with datagrams alternating between the two equal-cost paths. When one of the links fails, as illustrated in Figure 10-4, Router A invalidates the failed link's routing entries and uses the surviving link for all traffic to those destinations. Table 10-5 summarizes the effects of this failure on Router A's routing table.

Figure 10-4: Automatic recovery from a failed route.

Table 10-5: Router A Routing Table Contents After a Link Failure

Router Name   Destination            Next Hop           Number of Hops   Cost Metric
A             Host 192.168.130.2     Local              0                500
              Host 192.168.130.9     Local              0                500
              Network 192.168.125    Gateway—Port S0    2                2500
              Network 192.68.253     Gateway—Port S0    2                2350
              Network 192.168.125    Gateway—Port S1    2                Invalid route
              Network 192.68.253     Gateway—Port S1    2                Invalid route
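
The failover that Table 10-5 describes can also be sketched in Python. Again this is an illustration under assumed names, not the router's actual code: the table keeps one entry per destination network per port, a link failure invalidates every entry that uses the failed port, and forwarding then falls back to whichever valid entries survive.

from dataclasses import dataclass

@dataclass
class RouteEntry:
    network: str
    port: str
    hops: int
    metric: int
    valid: bool = True

# Router A's equal-cost entries from Table 10-4.
routes = [
    RouteEntry("192.168.125", "S0", 2, 2500),
    RouteEntry("192.68.253",  "S0", 2, 2350),
    RouteEntry("192.168.125", "S1", 2, 2500),
    RouteEntry("192.68.253",  "S1", 2, 2350),
]

def link_failed(port: str) -> None:
    # Invalidate every route that depends on the failed interface.
    for entry in routes:
        if entry.port == port:
            entry.valid = False

def next_hop(network: str) -> str:
    # Forward over any surviving valid route for the destination network.
    candidates = [e for e in routes if e.network == network and e.valid]
    if not candidates:
        raise RuntimeError("no valid route to " + network)
    return min(candidates, key=lambda e: e.metric).port

link_failed("S1")                 # the failure shown in Figure 10-4
print(next_hop("192.168.125"))    # S0: all traffic shifts to the surviving link
print(next_hop("192.68.253"))     # S0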

This recovery is automatic; users are not adversely affected, nor is any manual intervention required. Automatic recovery is not limited to equal-cost links; networks with redundant links of unequal cost can also recover from failures automatically.
