Optimum Network Utilization

Network utilization is a measurement of how much bandwidth is used during a specific time period. Utilization is commonly specified as a percentage of capacity. For example, a network-monitoring tool might state that network utilization on an Ethernet segment is 30 percent, meaning that 30 percent of the capacity is in use.
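
As a quick illustration, utilization over a measurement interval can be derived from interface byte counters. The sketch below is a minimal example; the counter values and link capacity are hypothetical, not taken from any particular tool.

```python
def utilization_percent(bytes_transferred: int, interval_seconds: float,
                        link_capacity_bps: float) -> float:
    """Percentage of link capacity used during a measurement interval."""
    bits_transferred = bytes_transferred * 8
    offered_bps = bits_transferred / interval_seconds
    return 100.0 * offered_bps / link_capacity_bps

# Hypothetical sample: 37.5 MB transferred in 10 seconds on a 100-Mbps link
print(utilization_percent(37_500_000, 10.0, 100_000_000))  # 30.0 percent
```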

Network analysis tools use varying methods for measuring bandwidth usage and averaging the usage over elapsed time. Usage can be averaged every millisecond, every second, every minute, every hour, and so on. Some tools use a weighted average whereby more recent values are weighted more heavily than older values. Chapter 3 discusses measuring network utilization in more depth.
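
One common form of weighted averaging is the exponentially weighted moving average (EWMA). The sketch below assumes a smoothing factor of 0.3; real tools choose their own weighting, and the per-minute samples are hypothetical.

```python
def ewma(samples, alpha=0.3):
    """Exponentially weighted moving average of utilization samples.

    alpha controls how heavily recent samples are weighted relative to
    older ones; 0.3 is an assumed value for illustration only.
    """
    smoothed = samples[0]
    history = [smoothed]
    for sample in samples[1:]:
        smoothed = alpha * sample + (1 - alpha) * smoothed
        history.append(smoothed)
    return history

# Hypothetical per-minute utilization percentages, including one burst
print(ewma([20, 25, 90, 30, 28]))
```

Note how the burst to 90 percent raises the smoothed value, which then decays as newer, lower samples arrive.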

Your customer might have a network design goal for the maximum average network utilization allowed on a segment. Actually, this is a design constraint more than a design goal. The design constraint states that if utilization on a segment is more than a predefined threshold, then extra segments or bandwidth must be added.

For wide-area networks (WANs), optimum average network utilization is about 70 percent. A 70 percent threshold for average utilization means that peaks in network traffic can probably be handled without obvious performance degradation. Most WANs have less capacity than LANs, so more care is needed when selecting bandwidth that can cover realistic variations in traffic. Customers have many options for technologies that can reduce bandwidth utilization on WANs, including advanced routing-protocol features and compression. Optimizing bandwidth utilization is covered in more detail in Chapter 13, "Optimizing Your Network Design."
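
To make the constraint concrete, a monitoring script might simply flag any WAN link whose average utilization crosses the 70 percent threshold. The link names and readings below are hypothetical.

```python
WAN_UTILIZATION_THRESHOLD = 70.0  # percent, per the design constraint

# Hypothetical average utilization readings per WAN link (percent)
links = {"hq-to-branch1": 45.2, "hq-to-branch2": 78.9, "hq-to-dc": 63.0}

for link, avg_util in links.items():
    if avg_util > WAN_UTILIZATION_THRESHOLD:
        print(f"{link}: {avg_util}% average; add bandwidth or segments")
    else:
        print(f"{link}: {avg_util}% average; within the design constraint")
```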

A typical "rule" for a shared, hub-based Ethernet LAN is that average utilization should not exceed 37 percent, because beyond this limit, the collision rate allegedly becomes excessive. This is not a hard-and-fast rule. The 37 percent number comes from the early 1980s when the Institute of Electrical and Electronics Engineers (IEEE) was still trying to decide which LAN technologies to standardize. Members of an IEEE subcommittee reported the 37 percent number to the IEEE 802 committee when describing traffic-handling studies they conducted to compare carrier sense multiple access collision detection (CSMA/CD) with token passing.

Token passing makes a node wait for a token before sending. According to the IEEE study, at modest loads, this wait means that token passing causes more delay (latency) than Ethernet. If more stations are added to a token ring, then the latency is even worse because the token must pass through each station.
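
A back-of-envelope sketch shows why the delay grows with station count. Assuming the token is equally likely to be anywhere on the ring, a station waits on average for about half a rotation; the 5-microsecond per-station delay below is an assumption for illustration, not a figure from the IEEE study.

```python
def mean_token_wait_us(stations: int, per_station_delay_us: float) -> float:
    """Average wait for the token: roughly half a full rotation,
    where a rotation visits every station on the ring."""
    return stations * per_station_delay_us / 2

# Assumed 5-microsecond handling delay per station (illustrative only)
for n in (10, 50, 100):
    print(f"{n} stations: {mean_token_wait_us(n, 5.0):.0f} us average wait")
```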

However, at around 37 percent utilization on a medium shared by 50 stations, Ethernet frames experience more delay than token ring frames, because the rate of Ethernet collisions becomes significant. (The study used 128-byte frames and compared 10-Mbps Ethernet to 10-Mbps token passing. The results are only slightly different if 4-Mbps or 16-Mbps token ring is used.)

The key point of the IEEE study was that token passing exacts a higher toll for each station added. For 100 stations, Ethernet frames start experiencing more delay than token ring frames at 49 percent load, instead of the 37 percent load for 50 stations. Armed with this knowledge about the IEEE study, you can help your customer understand which, if any, maximum network-utilization goals for shared Ethernet are appropriate.

Consider the case of an Ethernet segment that is shared by only two stations: a client that sends requests and a server that responds after receiving requests. In this case, is it a problem if network utilization exceeds 37 percent? There are no collisions because the server and client never try to send at the same time, so the 37 percent rule, which is concerned with collisions, does not apply. In fact, utilization can approach 100 percent unless the client or server is slow.

NOTE

For an interesting discussion of how inappropriate it is to insist on a 37 percent rule for shared Ethernet utilization, refer to "Measured Capacity of an Ethernet: Myths and Reality," by David R. Boggs, Jeffrey C. Mogul, and Christopher A. Kent. This paper shows that "for a wide class of applications, Ethernet is capable of carrying its nominal bandwidth of useful traffic, and allocates bandwidth fairly." The paper is online at ftp://gatekeeper.research.compaq.com/pub/DEC/WRL/research-reports/WRL-TR-88.4.pdf.

If configured for full-duplex operation, a point-to-point Ethernet link supports simultaneous transmitting and receiving. In most cases, however, bandwidth in both directions isn't used all the time. You may hear vendors or even other network engineers proclaim that full-duplex Fast Ethernet means that there's 200 Mbps available and that full-duplex Gigabit Ethernet means that there's 2000 Mbps available, and that 100 percent of this bandwidth can be utilized. Although this is theoretically true, 100 percent utilization doesn't occur in most cases. Consider the client talking to a server again. The client sends requests and the server responds, in lock step. The client doesn't try to send at the same time as the server, so the bandwidth usage does not double on the client's link to the Ethernet switch.

A point-to-point link that connects a switch to a server or to another switch, on the other hand, could use all the bandwidth, depending on traffic patterns. If the utilization exceeds about 70 percent, then it's probably time to upgrade to more bandwidth. Network traffic is bursty. You should provision both LAN and WAN capacity with the assumption that the average utilization will be exceeded during bursts.
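
A quick sketch over hypothetical per-second samples shows why provisioning to the average alone is risky: peaks can run well above it.

```python
# Hypothetical per-second utilization samples on a switch uplink (percent)
samples = [22, 18, 95, 30, 88, 25, 19, 92, 27, 24]

average = sum(samples) / len(samples)
peak = max(samples)

print(f"average {average:.0f}%, peak {peak}%")         # average 44%, peak 95%
print(f"peak-to-average ratio: {peak / average:.1f}")  # 2.2
# A link sized only for the 44 percent average would saturate during bursts;
# the 70 percent rule leaves headroom for peaks like these.
```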

Full-duplex Ethernet is becoming the standard method for connecting servers, switches, and even end users' machines. It's an essential performance boost for servers, in particular. With full-duplex Ethernet, a switch can transmit the next client's request at the same time the server is sending a response to a previous request.

Upgrading from half to full duplex and replacing hubs with switches is a common technical goal for customers who haven't already made the change. Switches have some disadvantages, due to their complexity, and can make protocol analysis more difficult, but the performance advantages almost always outweigh the disadvantages. Moving to a switched full-duplex network is usually straightforward, although autonegotiation of half versus full duplex has been notoriously buggy in the past. Servers have also been known to crash when suddenly hit with many more transactions than were possible before, when the rate was limited by medium contention. Nonetheless, switched Ethernet has replaced hub-based Ethernet as the de facto standard. Switches and full-duplex Ethernet are revisited in Chapter 3 and Chapter 10, "Selecting Technologies and Devices for Campus Networks."
