Web Traffic Software
Oversubscription greatly increases the chance of resource contention at any point in time. If five OC-3s are coming into a router and one OC-12 is going out, there is a chance of buffering because of oversubscription. If you have a sustained incoming traffic rate higher than your outgoing traffic capacity, your buffers will eventually fill up, at which point you start dropping traffic.
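As a rough sketch (with hypothetical buffer size), the time until a buffer overflows under sustained oversubscription is simply the buffer size divided by the excess input rate:

```python
def seconds_until_full(buffer_bits: float, in_bps: float, out_bps: float) -> float:
    """Time until a buffer of buffer_bits overflows when sustained input
    exceeds output capacity; infinite if there is no oversubscription."""
    excess = in_bps - out_bps
    if excess <= 0:
        return float('inf')
    return buffer_bits / excess

# Five OC-3s (~155 Mbps each) into one OC-12 (~622 Mbps), hypothetical 64-Mbit buffer:
print(seconds_until_full(64e6, 5 * 155e6, 622e6))  # buffer fills in under half a second
```

With the input sustained at roughly 775 Mbps against a 622-Mbps output, even a generous buffer fills in a fraction of a second, after which drops begin.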
This chapter introduces the concepts of virtual private networks (VPNs) and why they are used. I examine issues with sending traffic across public networks and what VPNs can do to protect this traffic. I introduce connection methods for VPNs, types of VPNs, things to consider when using VPNs, VPN components, VPN designs and issues, examples of VPN implementations, and some issues to consider when choosing a VPN implementation. Further chapters in this book expand on the topics introduced here.
By default, R5 uses the path through R4 because it offers the lowest metric (14,869,333). To set up unequal-cost load balancing, assign a variance of 2 under the EIGRP process on R5. R5 multiplies the best metric of 14,869,333 by 2, to get 29,738,666. R5 then uses all loop-free paths with a metric less than 29,738,666, which includes the path through R3. By default, R5 load balances over these paths, sending traffic along each path in inverse proportion to its metric.
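The variance calculation can be sketched as follows (R3's metric here is a hypothetical value under the computed limit; loop-freedom is assumed to be already checked):

```python
def eigrp_lb_paths(paths: dict, variance: int) -> dict:
    """Return the loop-free paths whose metric is less than
    best_metric * variance (EIGRP unequal-cost load balancing)."""
    best = min(paths.values())
    limit = best * variance
    return {next_hop: metric for next_hop, metric in paths.items() if metric < limit}

# Metrics from the example (next hop -> metric); R3's value is illustrative:
paths = {"R4": 14869333, "R3": 28230000}
print(eigrp_lb_paths(paths, 2))  # both paths qualify: 28230000 < 29738666
```

With variance 1 (the default), only the best path through R4 would be used.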
Originally, the queuing mechanism on all interfaces was first-in, first-out (FIFO), meaning that the first packet to arrive for transmission would be the first packet transmitted, the fifth packet arriving would be the fifth packet transmitted, and so on. This queuing mechanism works just fine if all of your traffic has no delay concerns (perhaps FTP or other batch transfer traffic). If you've ever worked with data-link switching (DLSw), however, you know how sensitive that traffic is to delay in the network.
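FIFO behavior maps directly onto a simple queue, as this minimal sketch shows:

```python
from collections import deque

# FIFO: packets leave in exactly the order they arrived.
queue = deque()
for pkt in ["p1", "p2", "p3", "p4", "p5"]:
    queue.append(pkt)                                    # enqueue on arrival
sent = [queue.popleft() for _ in range(len(queue))]      # dequeue for transmission
print(sent)  # ['p1', 'p2', 'p3', 'p4', 'p5'] -- the fifth arrival is the fifth transmitted
```

No packet is reordered or prioritized, which is exactly why delay-sensitive traffic such as DLSw can suffer behind a burst of bulk transfers.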
Whereas web traffic receives at least 128 kbps of bandwidth (if the web traffic needs that much bandwidth), voice traffic receives 256 kbps of priority bandwidth (if the voice traffic needs that much bandwidth), meaning the voice traffic is transmitted first, ahead of the web traffic. However, the voice traffic will not starve out the other traffic types, because the voice traffic is also limited to consuming no more than 256 kbps.
If the AS path is not manually manipulated by some administrative means, the path going over the fewest number of autonomous systems is selected by the router regardless of available bandwidth. However, if the AS that is attempting to influence the incoming traffic flow is sending out EBGP updates with a manipulated AS-path attribute over that undesired path, the receiver of this update is less likely to select it as the best because the AS path now appears to be longer.
IP spoofing is a technique used to gain unauthorized access to computers, whereby the intruder sends messages to a computer with an IP address indicating that the message is coming from a trusted host. To engage in IP spoofing, hackers must first use a variety of techniques to find an IP address of a trusted host and then modify their packet headers to appear as though packets are coming from that trusted host. Further, the attacker can engage other unsuspecting hosts to also generate traffic that appears as though it too is coming from the trusted host, thus flooding the network.
A token bucket is a formal definition of a rate of transfer. For this discussion, assume the token bucket starts full. This implies the maximum amount of tokens is available to sustain incoming traffic. Assume a bucket, which is being filled with tokens at a rate of x tokens per refresh interval. Each token represents 1 bit of data. To successfully transmit a packet, there must be a one-to-one match between bits and tokens. As a result, when a packet or frame arrives at the port or interface, and enough tokens exist in the bucket to accommodate the entire unit of data, the packet conforms to the contract, and therefore is forwarded. When the packet is successfully transmitted, a number of tokens equal to the size of the transmitted packet is removed from the bucket. Figure 2-1 illustrates the token bucket mechanism.
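The mechanism can be sketched in a few lines (the bucket depth and refill values below are hypothetical, not tied to any particular platform):

```python
class TokenBucket:
    """Minimal token-bucket sketch: one token per bit, refilled at a fixed rate."""

    def __init__(self, depth_bits: int, refill_bits: int):
        self.depth = depth_bits      # bucket capacity
        self.tokens = depth_bits     # assume the bucket starts full
        self.refill = refill_bits    # tokens added per refresh interval

    def tick(self):
        """One refresh interval elapses; tokens never exceed the bucket depth."""
        self.tokens = min(self.depth, self.tokens + self.refill)

    def conforms(self, packet_bits: int) -> bool:
        """Forward only if enough tokens exist for the entire unit of data."""
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits   # remove tokens equal to the packet size
            return True
        return False                     # non-conforming (dropped or marked)

tb = TokenBucket(depth_bits=12000, refill_bits=8000)
print(tb.conforms(12000))  # True: the bucket starts full
print(tb.conforms(1))      # False: the bucket is empty until the next refresh
tb.tick()
print(tb.conforms(8000))   # True after the refill
```

Note that an entire packet either conforms or does not; partial token matches are not allowed, which is what bounds bursts to the bucket depth.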
Because configurations for security tend to restrict traffic flows, there is very little room for error when you are trying to ensure that good traffic passes and bad traffic doesn't (assuming you are able to correctly identify the bad traffic, which isn't always the case). To compound the matter, to maintain your security system, you must receive log messages from all of your security technologies. Without log files, you won't easily be able to tell whether things are working. The volume of these messages can be very burdensome as networks increase in size. Also, patches are released for various vulnerabilities, but the vulnerabilities don't magically disappear. You still must find a way to test and apply the patches to all of your systems.
Although the volume of Web traffic on the Internet is staggering, a large percentage of that traffic is redundant---multiple users at any given site request much of the same content. This means that a significant percentage of the WAN infrastructure carries the identical content (and identical requests for it) day after day. Eliminating a significant amount of recurring telecommunications charges offers an enormous savings opportunity for enterprise and service provider customers.
First-in, first-out (FIFO) queuing is the default queuing strategy that applies to all interfaces with more than 2 Mbps of bandwidth, or, in other words, E1-size or greater interfaces. With the FIFO queuing strategy, packets are forwarded through the interface in the order that they are received. For example, Figure 6-1 shows three traffic conversations, or flows. Conversation A consists of Telnet packets that are approximately 64 bytes; packets in conversation B are from a network application, and range from 750 to 1020 bytes; and packets from conversation C are HTTP web traffic packets, which are approximately 1500 bytes. When these three host stations send packets during periods of low network traffic utilization, all three conversations should be successful. If these same three conversations take place during a period of high network utilization, however, packets from conversation C will be interspersed between the much smaller packets from the A and B conversations, which could potentially...
CGFs offer more protection than CTP firewalls. Figure 2-17 shows the process that a person goes through when setting up a connection through a CGF. In Step 1, Richard attempts to set up a connection to the internal web server (188.8.131.52). The CGF intercepts the connection and authenticates it, if this has been configured. After authentication, the CGF opens a separate connection to the internal web server (Step 2). At this point, any web traffic sent by Richard to 184.108.40.206 first is processed by the CGF and then is redirected to the internal web server, as shown in Step 3. Any other traffic from Richard is dropped unless it has been authorized by the first authentication request or unless the CGF asks for authentication for any additional connections. If Richard does not authenticate successfully, the CGF terminates the connection.
CLIs require knowledge of the command set in the firewall product. For example, using the iptables CLI to configure NetFilter to allow inbound Secure Shell (SSH), e-mail, and web traffic (using TCP ports 22, 25, and 80, respectively) and to deny all other traffic requires the configuration in Example 11-1.
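The match logic behind such a rule set can be sketched as a simple predicate (Python is used here only to illustrate the default-deny decision; the actual syntax is the iptables configuration shown in Example 11-1):

```python
ALLOWED_TCP_PORTS = {22, 25, 80}  # SSH, SMTP (e-mail), HTTP (web)

def permit(protocol: str, dst_port: int) -> bool:
    """Default-deny policy: only inbound TCP to the allowed ports passes."""
    return protocol == "tcp" and dst_port in ALLOWED_TCP_PORTS

print(permit("tcp", 80))   # True  (web)
print(permit("tcp", 23))   # False (Telnet is denied by the default policy)
print(permit("udp", 80))   # False (not TCP)
```

The key design point is the order of evaluation: explicit permits are checked first, and anything that falls through is denied.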
Using the Web Cache Control Protocol, the Cisco IOS router routes requests for TCP port 80 (HTTP traffic) over a local subnet to the cache engine. The cache engine is dedicated solely to content management and delivery. Because only Web requests are routed to the cache engine, no other user traffic is affected by the caching process---Web caching is done off to the side. For non-Web traffic, the router functions entirely in its traditional role. 2. The router, running the Web Cache Control Protocol, intercepts TCP port 80 Web traffic and routes it to the cache engine. The client is not involved in this transaction, and no changes to the client or browser are required. Because the router redirects packets destined for Web servers to the cache engine, the cache engine operates transparently to clients. Clients do not need to configure their browsers to be in proxy server mode. This is a compelling feature for ISPs and large enterprises, for whom uniform client configuration is extremely...
With URL filtering enabled, the Cisco Security Appliance stops outbound HTTP, HTTPS, and FTP traffic until a URL-filtering server permits the connection. If the primary URL-filtering server and the secondary server do not respond, then outbound web traffic (port 80) stops until the URL-filtering server comes back online. However, the allow option causes the Cisco Security Appliance to forward HTTP traffic without filtering when the URL-filtering server(s) is unavailable.
If the intent is to use CAR to mark packets, as opposed to police, the drop keyword is not used. Instead, a set action is specified to modify the ToS field in the IP header, or to set the local QoS group value for the packet. The transmit keyword is yet another option, which allows a packet to be forwarded without modifying any existing settings. One additional feature with CAR is the flexibility to configure multiple rate-limit statements on the same interface. By using the continue keyword, independently or with a set action, packets can be processed through multiple rate-limit statements until a match is found. In the event a match is not found, the default action is to transmit. Therefore, in the absence of a match, the packet is just forwarded. Example 9-8 demonstrates configuring CAR to mark traffic. In the example, an extended ACL is configured specifying the traffic to be considered. In this instance, it is secure web traffic being forwarded to the serial interface noted in the example. However, the traffic is not marked...
After this has been determined, access to websites that are within corporate policy can be enforced through help from the network using techniques including URL filtering. URL filtering essentially compares a website entered by an employee against a database of restricted websites. If the website is not in the list, access is granted. If it is in the list, typically the employee is given a warning screen that indicates he is trying to access a restricted web page. These lists can also be time-based in cases where a company doesn't really care what websites the employees visit; it just doesn't want the web traffic to mess up the business traffic during peak work hours.
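The lookup itself is just a set-membership test, as this sketch shows (the site names and the returned strings are illustrative, not from any real filtering product):

```python
RESTRICTED = {"badsite.example", "games.example"}  # hypothetical blocked-site database

def check_url(host: str) -> str:
    """If the site is not in the list, access is granted; otherwise
    the employee sees a warning screen."""
    if host in RESTRICTED:
        return "warning: restricted web page"
    return "access granted"

print(check_url("intranet.example"))  # access granted
print(check_url("badsite.example"))   # warning: restricted web page
```

A time-based variant would simply consult the clock before deciding whether the restricted list applies at all.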
Description: Block IP traffic from anyone to any internal subnet or host.
20 Permit TCP from guest user role to any equaling port 80 or 443.
Description: Allow web traffic from clients in the guest user role to the Internet.
30 Deny IP from guest user role to any.
Description: Block everything else.
A TCP Map is used to configure the customized inspection and protection for TCP connections. A TCP Map can be configured as part of the HTTP inspection to provide TCP and HTTP protection for web traffic. A TCP Map can also be configured separately and independent of HTTP inspection.
Network simulation has at least two distinct realizations. The first models the network using software to emulate the traffic sources and sinks (drop offs), network devices, and the links that connect them. By varying model parameters, the designer can approximate the impact of more or less traffic demand or network resources. Although simulation software is expensive, for a large network it is far less expensive than building a flawed design. The second kind of simulation uses special hardware and software to generate traffic for injection into a live network for subsequent traffic analysis.
To verify the configuration, view the DLSw reachability on the border routers. You should see the static ICANREACH from the peers within the same group as the border peer. As mentioned previously, a Windows workstation is a great way to test DLSw. By putting workstations on the various LAN segments, you can generate traffic, create circuits, and force explorers. Example 13-74 shows a workstation connected to the DLSw domain, issuing an explorer for the NetBIOS station BORDER-PATROL.
Not all switches allow a port that is configured as the SPAN destination port to receive incoming traffic. Since the sensor's monitoring interface is usually a SPAN port on a Cisco switch, this presents a problem. If the switch does not enable the SPAN destination port to receive incoming traffic, the TCP RST packets will not be accepted, thus preventing the sensor from resetting the TCP connection. Therefore, if you are using a SPAN port to capture your network traffic and plan to use the TCP reset capability, you need to verify that your switch supports the capability to receive incoming traffic on the SPAN destination port.
On the other hand, the path for inbound traffic to reach your local networks, NetB and NetC, depends on how you advertise these networks. If you advertise NetC over the NY link only, incoming traffic toward NetC will take the NY link. Similarly, if you advertise NetB over the SF link only, traffic toward NetB will take the SF link. Although this scenario appears optimal for traffic entering the AS, there is no provision for redundancy for the two advertised networks.
So far, we have not specified anything about data plane policing or controlling incoming traffic mapped onto the TE tunnels. This is done by design because the MPLS TE specification does not define any data plane interaction and does not specify any bandwidth reservation. A data plane can be set up for quality of service (QoS) in the same way as DiffServ, whereas a control plane sets up the TE tunnels on links that meet bandwidth constraints. The available bandwidth information is useful in
In a typical service provider network, traffic policing is often implemented inbound at the provider edge (PE) router to rate-limit incoming traffic from the customer edge (CE) router to ensure that the customer traffic rate is not exceeding the contractual rate. Traffic shaping is often implemented outbound at the PE and at the CE to limit the traffic rate between the PE and CE and to allow for FRF. 12 fragmentation on Frame Relay connections between the PE and the CE.
In this example, the inspection rule OUTBOUND performs generic TCP, UDP, and ICMP traffic. The access list OUTSIDEACL is applied to the outside interface and blocks all incoming traffic except ICMP unreachable packet-too-big messages that support maximum transmission unit (MTU) path discovery. The access list INSIDEACL, applied to the inside interface in the inbound direction, permits all TCP, UDP and ICMP traffic initiated from the inside network. The inspection rule OUTBOUND, applied to the inside interface in the inbound direction, inspects the outbound packets and automatically allows the corresponding return traffic.
TIP Consider the following situation: You do not want hackers exploiting port 80 to access your network. Because you do not host a web server, it would be possible to block incoming traffic on port 80, except that your internal users need web access. When they request a web page, return traffic on port 80 must be allowed. The solution to this problem is to use the established keyword. The ACL will allow the response to enter your network, because it will have the ACK bit set as a result of the initial request from inside your network. Requests from the outside world will still be blocked because the ACK bit will not be set, but responses will be allowed through.
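The test the established keyword performs reduces to checking the TCP control bits, as this sketch illustrates:

```python
def established_permits(ack: bool, rst: bool) -> bool:
    """The established keyword matches TCP segments with the ACK or RST bit
    set -- that is, replies to sessions initiated from inside the network.
    A fresh connection attempt from outside is a SYN-only segment, with
    neither bit set, so it does not match."""
    return ack or rst

print(established_permits(ack=True,  rst=False))  # True: return web traffic passes
print(established_permits(ack=False, rst=False))  # False: outside-initiated SYN is blocked
```

This is only an approximation of stateful filtering: it trusts the bits in the packet rather than tracking real sessions, which is why the text calls it session-filtering approximation elsewhere.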
All of these factors are out of the customer's control. Customers who want to override these influences and control incoming traffic via one path or the other can do so by advertising their routes with different metrics. The provider will direct its traffic toward the customer based on the metric value. In Figure 7-9, the customer is advertising its routes with a metric of 50 toward NY and a metric of 100 toward SF. As such, traffic toward the customer will take the NY route.
Flow-based WRED classifies incoming traffic into flows based on parameters such as destination and source addresses and ports. Flow-based WRED uses this classification and state information to ensure that each flow does not consume more than its permitted share of the output buffer resources. Flow-based WRED determines which flows monopolize resources and more heavily penalizes these flows.
You must use other means, such as BGP communities or AS-path prepending, to achieve incoming link selection.
Controlling the load distribution of incoming traffic over multiple links is more difficult in the multihomed scenario when links to multiple service providers are used. You cannot use the MED when the customer connects to multiple providers because the updates are sent to two different autonomous systems. Recall that the MED is used only when you compare routes that are received from a single directly connected AS over two parallel links. Therefore, route selection decisions will most likely use the AS-path attribute and prefer the route with the shortest AS-path length.
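The tie-break described above can be sketched as follows (the AS numbers and provider names are hypothetical, and all other BGP attributes are assumed equal):

```python
def prefer_shortest_as_path(routes: list) -> dict:
    """Among otherwise-equal routes for a prefix, BGP prefers
    the one with the shortest AS path."""
    return min(routes, key=lambda r: len(r["as_path"]))

# Hypothetical advertisements for the same prefix via two providers:
routes = [
    {"provider": "ISP-A", "as_path": [65001, 65010, 65020]},
    {"provider": "ISP-B", "as_path": [65002, 65020]},
]
print(prefer_shortest_as_path(routes)["provider"])  # ISP-B (two hops beats three)
```

Prepending extra AS numbers onto the path advertised over one link lengthens that path and steers inbound traffic toward the other link.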
The debug frame-relay command is no longer a supported IOS command. However, if you use the verbose parameter with this command, the IOS starts Frame Relay debugging for incoming traffic (see Example 10-10). This command will then display debugging information about the packets that are being received on Frame Relay interface(s). The Cisco Documentation CD suggests that because the debug frame-relay command generates a lot of output, you should use it only when traffic on the Frame Relay network is less than 25 packets per second. Example 10-10 shows a sample output of the debug frame-relay verbose command. As you can see, each line of the output corresponds to one received packet, and each line has a corresponding DLCI number and a packet type that identifies the payload (see the Cisco Documentation CD for a listing of packet types). This command would be useful for verifying the flow of incoming traffic related to different upper layer protocols.
Congestion happens when the rate of input (incoming traffic switched) to an interface exceeds the rate of output (outgoing traffic) from an interface. Aggregation, speed mismatch, and confluence are three common causes of congestion. Queuing is a congestion management technique that entails creating a few queues, assigning packets to those queues, and scheduling departure of packets from those queues. Table 4-2 provides a comparative summary for the queuing disciplines discussed in this chapter.
For example, a video client wants to watch a 1.5-Mbps IP multicast-based video feed sent from a corporate video server. The video client sends an IGMP join message to the video server. The next-hop router for the client logs the IGMP join message. IP multicast traffic is transmitted downstream to the video client. The switch detects the incoming traffic and examines the destination MAC address to determine where the traffic should be forwarded. Because the destination MAC address is a multicast address and there are no entries in the switching table directing the traffic, the 1.5-Mbps video feed is simply sent to all ports.
A multi-layer switch makes switching and filtering decisions on the basis of OSI data link layer (Layer 2) and OSI network-layer (Layer 3) addresses. This type of switch dynamically decides whether to switch (Layer 2) or route (Layer 3) incoming traffic. A multi-layer LAN switch switches within a workgroup and routes between different workgroups.
When traffic is mapped to a TE tunnel, based on TE tunnel bandwidth, a policer can be set up to police the incoming traffic and ensure it does not exceed traffic contract (in this case, tunnel bandwidth). Queuing and weighted random early discard (WRED) can be enabled on the head end and mid point nodes so that marked packets get the needed per-hop behavior to ensure the correct delivery of traffic.
ISPs compare incoming traffic to the intrusion detection system (IDS) signatures and intrusion prevention system (IPS) signatures or both signatures in the ISP database. Signatures are characteristics of particular traffic patterns. A signature such as several packets to different destination ports from the same source address within a short period of time might detect port scans. Another such signature could be SYN to a nonlistening port.
Incoming traffic to the customer is controlled by using either AS-path prepending or the multi-exit discriminator (MED). Because the customer has multiple connections to the same AS, the MED is the ideal attribute to use. When the customer announces its routes to the ISP, a bad (high) MED value on the backup link and a good (low) value on the primary link are set. Incoming traffic to the customer is directed to the primary link by using the MED. In the primary edge router of the customer, all routes that are sent to the ISP have their MED attribute set to the value 1000 by the route-map LowMED out. In the backup edge router of the customer, all routes that are sent to the ISP have their MED attribute set to the value 2000 by the route-map HiMED out. Because the ISP receives the routes with all other attributes set to the same values, the MED values direct traffic for the customer to the primary link.
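The ISP's decision reduces to a lowest-MED comparison, sketched here (attribute values taken from the example; all other attributes are assumed identical):

```python
def prefer_lowest_med(routes: list) -> dict:
    """Given routes for one prefix received from the same neighboring AS,
    BGP prefers the one with the lowest MED (lower is better)."""
    return min(routes, key=lambda r: r["med"])

routes = [
    {"link": "primary", "med": 1000},  # set by the route-map LowMED out
    {"link": "backup",  "med": 2000},  # set by the route-map HiMED out
]
print(prefer_lowest_med(routes)["link"])  # primary -- incoming traffic takes this link
```

If the primary link fails, its advertisement disappears, only the MED-2000 route remains, and inbound traffic shifts to the backup link automatically.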
The basic premise of this overview is that host devices attempting access are challenged for valid credentials before they are allowed network connectivity. After it's authenticated and authorized, the Layer 2 switch inspects incoming traffic from the user on the authenticated authorized port and filters frames, allowing only those with the authenticated MAC address. Although 802.1X is a highly recommended and essential component for 802.1AE, it alone cannot address unauthorized access to or prevent the tampering of information traversing our networks.
BGP weight should be used in Routers A and B for all the prefixes being advertised from Router C. This is necessary to provide a safeguard against AS path prepending. It is normal practice for multihomed customers to use the AS path-prepending technique to affect the balance of the incoming traffic flows. In some cases the prepending of ASNs would break the uRPF. For example, the downstream customer prepends enough ASNs to its advertisements to Router A that Router A's best path to Router C would be through Router B. This means that the Router A-C forwarding path actually would select a Router A-B-C forwarding path. uRPF would not have a valid path for source addresses coming up the Router C-A link, effectively blocking the downstream customer's outbound traffic on the Router C-A link. A BGP weight (see Example 4-5) applied on Routers A and B would override the local effects of AS path prepends.
It checks to see if an SA already exists for that peer using the desired security services. If it finds an existing SA, it places the SPI of the SA into the IPSec header and sends the packet. The destination peer takes the SPI, combines it with the IPSec protocol and the destination IP address (itself), and locates the existing SA in the Security Association Database it maintains for incoming traffic on that interface. Once it finds the SA, the destination peer knows how to unwrap the data for use.
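The inbound lookup is essentially a table keyed by that triple, as this sketch shows (the SADB structure, field names, and values here are illustrative, not a real IPsec API):

```python
# Inbound SAs are located by the triple (SPI, IPsec protocol, destination address).
sadb = {
    (0x1001, "esp", "10.0.0.1"): {"cipher": "aes", "key_id": 7},  # hypothetical SA
}

def lookup_sa(spi: int, protocol: str, dst: str):
    """The receiver combines the SPI from the IPsec header with the protocol
    and its own address to find the SA that tells it how to unwrap the data."""
    return sadb.get((spi, protocol, dst))

print(lookup_sa(0x1001, "esp", "10.0.0.1"))  # {'cipher': 'aes', 'key_id': 7}
print(lookup_sa(0x9999, "esp", "10.0.0.1"))  # None -- no SA, packet cannot be processed
```

Because the SPI alone selects the inbound SA at the receiver, the sender must place the correct SPI in every protected packet.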
A common requirement for filtering is to permit IP traffic for sessions originating from within your network but to deny IP traffic for sessions originating from outside your network. Using basic extended access lists, you can approximate session filtering by using the established keyword with the permit command. The established keyword filters TCP packets based on whether the ACK or RST bits are set. This method of using the established keyword is available only for the TCP upper-layer protocol. For the other upper-layer protocols (such as UDP, ICMP, and so forth), you would have to either permit all incoming traffic or define all possible permissible source destination host port address pairs for each protocol.
This second model is much more secure because it offers multiple levels of security to the corporation. The exterior screening router acts as a first-level filter to permit or deny traffic coming in from the Internet to the internal campus. It validates most incoming traffic before passing it on to the firewall. The firewall then provides the more CPU-intensive function of packet-by-packet inspection. In this scenario, it is also effective to include an active audit device that includes network traffic monitoring and intrusion detection on the network segment connecting the firewall to the exterior router. This device can verify adherence to the corporate security policy and can pinpoint and isolate any attacks from the Internet to the corporate network---or any attacks instigated from your internal network out to the Internet.
Inbound traffic still adheres to the minimalist filtering policy, blocking all traffic to all ports unless you otherwise configure the router to permit the traffic. Unfortunately, filtering incoming traffic can only be done based on the destination port number, so it is not possible to permit only certain external hosts to access the protected resources. Either the entire Internet can access the resources or none of the Internet can.
One device in the virtual cluster, the virtual cluster master, directs incoming traffic to the other devices, called secondary devices. The virtual cluster master monitors all devices in the cluster, keeps track of how busy each is, and distributes the session load accordingly. The role of virtual cluster master is not
The AutoQoS VoIP commands should not be applied to an interface where QoS commands have previously been configured. However, after you enable AutoQoS on an interface, you can fine-tune and modify the AutoQoS-generated configuration commands if necessary. The QoS markings of the incoming traffic are honored (trusted) on an interface in two cases. The first case is when the auto qos voip trust command is applied to an interface. The second case is when a Cisco IP phone is attached to the switch port, and the auto qos voip cisco-phone command is applied to the interface. If a Cisco IP phone is disconnected from such a port and a workstation is connected to the port directly, the switch discovers the departure of the Cisco IP phone (using CDP version 2), and it changes its behavior to no trust on that port. The egress queuing and buffer allocation on a port are determined automatically based on the interface type. AutoQoS VoIP generates optimal priority queuing (PQ) and weighted round-robin (WRR)...
(NAT) and Port Address Translation (PAT) are mechanisms for conserving registered IP addresses in large networks, and they also simplify IP addressing tasks. NAT and PAT translate IP addresses within private internal networks to legal IP addresses for transport over public external networks such as the Internet without requiring a registered subnet address. Incoming traffic is translated for delivery within the inside network.
Congestion happens when the rate of input (incoming traffic switched) to an interface exceeds the rate of output (outgoing traffic) from an interface. Why would this happen? Sometimes traffic enters a device from a high-speed interface and has to depart from a lower-speed interface; this can cause congestion on the egress lower-speed interface, and it is referred to as the speed mismatch problem. If traffic from many interfaces aggregates into a single interface that does not have enough capacity, congestion is likely; this is called the aggregation problem. Finally, if the joining of multiple traffic streams causes congestion on an interface, it is referred to as the confluence problem.
Tmstats_ascii has enough information to do per-exit-point accounting. This gives you your traffic matrix: you know the amount of traffic destined for a particular nonrecursive prefix and the amount of time in which that traffic has arrived, and you can easily compute the traffic rate. In the preceding example, with 698 packets heading toward 192.168.1.7 in the last 444745 milliseconds, the average traffic rate is about 1.5 pps. This isn't the busiest network in the world, but it's good enough to serve as an example here.
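The rate computation is straightforward once the millisecond interval is converted to seconds:

```python
packets = 698
interval_ms = 444745

rate_pps = packets / (interval_ms / 1000)  # packets per second
print(round(rate_pps, 2))  # about 1.57 pps, matching the text's "about 1.5 pps"
```

Multiplying by an average packet size (if tmstats also reports byte counters) would turn the same arithmetic into a bits-per-second figure for the traffic matrix.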
Does your traffic need to be prioritized on your WAN circuits? If yes, select a method of classifying your traffic flows. IP Precedence is recommended for weighting (usually backbone networks). 4. Do you have a hub-and-spoke Frame Relay network or another need for shaping your traffic flows?
Usage Guidelines You can compress the headers of your TCP IP packets in order to reduce the size of your packets. TCP header compression is supported on serial lines using Frame Relay, HDLC, or Point-to-Point Protocol (PPP) encapsulation. You must enable compression on both ends of a serial connection. RFC 1144 specifies the compression process. Compressing the TCP header can speed up Telnet connections dramatically. In general, TCP header compression is advantageous when your traffic consists of many small packets, not for traffic that consists of large packets. Transaction processing (usually using terminals) tends to use small packets, while file transfers use large packets. This feature only compresses the TCP header, so it has no effect on UDP packets or other protocol headers.
Enabling forwarding adjacency changes not only the traffic patterns for traffic from the TE headend router, but it also influences the path decision other routers make, which can change your traffic flow. FA can solve some problems, but it can create others if you're not careful.
Consider the St. Louis-Denver link shown in Figure 9-1 again. If the load on this link increases to 900 Mbps for a significant amount of time, it's clear that you want to push that traffic somewhere else, because it's an OC-12 link, which has an IP capacity of only about 600 Mbps. So you're dropping 33 percent of your traffic, which is a bad thing.
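The 33 percent figure follows directly from the offered load and the link's usable capacity:

```python
offered_mbps = 900    # sustained load on the link
capacity_mbps = 600   # approximate IP capacity of an OC-12

drop_pct = (offered_mbps - capacity_mbps) / offered_mbps * 100
print(round(drop_pct))  # 33 -- a third of the offered traffic is dropped
```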
LATENCY Network latency is hard to control when an external party controls the intermediary network. Not knowing or being able to control the traffic and links with which your traffic will be intermixed renders most packet-based latency controls ineffective. The best options are to limit the amount of high-latency effects caused by your own traffic. This can be done by lowering the IP MTU for links connecting to an IP VPN, or by using FRF.12 if the intermediary network offers a frame-relay connection.
Note that tmasinfo has no traffic counters; its job is to present the BGP portion of the routing table should you care to examine it. The highlighted lines in Example 10-8 show the tmasinfo entries for 220.127.116.11/8 and 18.104.22.168/8, which are the prefixes that gsr2 is sending traffic toward. The three columns in each of the highlighted lines are the BGP next hop, the next hop AS for that route, and the route itself. These columns are more fully explained in the next section.
To demonstrate and measure the behavior of sharing on transmit queues, two packet-generator ports were connected to the switch as shown in Figure 7-4. The packet-generator port connected to Gigabit Ethernet 1/1 was sending traffic with a DSCP value of zero at 1.0 Gbps. The packet-generator port connected to Gigabit Ethernet 1/2 was sending traffic with a DSCP value of 40 at 1.0 Gbps. The traffic sent by both interfaces was intended for another traffic port connected to interface Gigabit Ethernet 5/1. Connected to interface Gigabit Ethernet 5/1 was a traffic-generator port measuring the traffic rate for each DSCP flow. Three trials were conducted. The first trial involved the default configuration of bandwidth, whereas the remaining trials applied the bandwidth parameter at 200 Mbps, 750 Mbps, and 900 Mbps, respectively. Table 7-5 shows the results of the trials. Example 7-25 displays the basic configuration used for the trial.
A number of situations exist in which an AS will send out an aggregate as well as its more-specific routes. This usually occurs in situations where the customer is multihomed to a single provider. The provider would use the more-specific routes to make better decisions when sending traffic toward the customer. At the same time, the provider can propagate the aggregate only toward the NAP to minimize the number of routes propagated to the Internet. This is illustrated in Figure 6-30.
So that you have a better understanding of a man-in-the-middle attack, I'll use Figure 2-5 to illustrate how this attack occurs. In this example, PeerA wants to send data to PeerB. PeerA does a DNS lookup for PeerB's address, shown in Step 1. However, the attacker also sees the DNS request and sends a reply back to PeerA before the DNS server has a chance, shown in Steps 2 and 3. The IP address that the attacker sends is the attacker's own IP address. PeerA knows no better and assumes that when it uses the IP address in the DNS reply, it is sending traffic to PeerB; however, as shown in Step 4, the traffic actually is directed to the attacker.
There is something interesting about the last entry (*, 22.214.171.124), which is the Data-MDT for EuPoBank. No C flag is present, which indicates that no mVRF is connected. This is because the Paris PE router is sending traffic to this tunnel from its connected source; the PE router is not receiving traffic from the tunnel. The Paris PE router is the root of the (*, 22.214.171.124) entry only.
Step 2 Activate the autodiscovery phase using the auto discovery qos command. For the most accurate traffic analysis, leave the autodiscovery phase running for as long as possible, preferably several days. During the autodiscovery phase, the generated policy can optionally be reviewed using the show auto discovery qos command.
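The step above can be sketched in configuration form. This is a minimal illustration of the AutoQoS Enterprise workflow; the interface name is hypothetical, and your platform's supported interfaces may differ.

```
! Start autodiscovery on the WAN interface (hypothetical interface name)
interface Serial0/0
 auto discovery qos
!
! ...let autodiscovery run for several days, then review the
! suggested policy from privileged EXEC mode:
!   show auto discovery qos
!
! When satisfied, generate and apply the policy:
interface Serial0/0
 auto qos
```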
To determine which path to manipulate, the administrator performs a traffic analysis on Internet-bound traffic by examining the most heavily visited addresses, web pages, or domain names. This information can usually be found by examining network management records or firewall accounting information.
The PIX in this scenario is set up with a nat (inside) 0 0 0 command, which implies that the PIX will not translate any of the IP addresses on the inside network. Due to how the nat 0 command is designed, the PIX starts to proxy ARP on the outside interface for any and all addresses upon being configured in this manner. Consequently, if host A wants to access the Internet, sends an ARP request, and the PIX proxy-ARPs before the router can respond, host A ends up sending traffic destined for the Internet to the PIX, which is incorrect routing. The way around this scenario is to turn off proxy ARP on the PIX. However, after you do this, the PIX stops ARPing for all addresses, even the ones for which it should ARP. Now the router needs to be set up with routes for the inside network behind the PIX so that it does not have to rely on the PIX proxy ARPing for that network. This situation would never have arisen if the nat 0 command were set up with the specific IP address ranges contained on the inside...
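The two workarounds described above can be sketched as follows. This is a hedged illustration only; the interface names and addresses are hypothetical, and the exact syntax varies by PIX software version.

```
! On the PIX: disable proxy ARP on the outside interface
sysopt noproxyarp outside
!
! Better: scope nat 0 to the actual inside range so the PIX
! never proxy-ARPs for addresses it does not own
nat (inside) 0 10.1.1.0 255.255.255.0
!
! On the upstream router: static route for the inside network,
! pointing at the PIX outside address (hypothetical addresses)
ip route 10.1.1.0 255.255.255.0 192.168.1.1
```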
Method of distributing Web traffic by taking into account Web server availability and relative client-to-server topological distances in order to determine the optimal Web server for a client. DistributedDirector uses the Director Response Protocol to query DRP server agents for BGP and
Tools used for traffic analysis range from manual identification of applications using Cisco IOS software commands to dedicated software- or hardware-based analyzers that capture live packets or use the Simple Network Management Protocol (SNMP) to gather interface information. Remote monitoring probes can also be used to support traffic analysis. The following sections include examples of some of these tools.
Another more complex goal is to guarantee that incoming traffic from the Internet destined for European enterprise sites uses the Paris router and incoming traffic for North American enterprise sites uses the New York router. This goal requires the enterprise routers to advertise to the Internet routes to enterprise sites. The routes must include metrics so that routers on the Internet know the preferred path to sites on the enterprise intranet.
These signatures are designed to detect illegal MHTML URLs in a monitored connection. The signature with a SubSignature ID of 0 examines web traffic (to port 80), and the signature with a SubSignature ID of 1 examines e-mail traffic (to port 25). Assume that you treat the fidelity rating as a percentage indicating the likelihood that the signature detected the traffic that it is designed to identify (not a false positive).
Cisco IOS PBR allows you to classify traffic based on extended access list criteria, set IP precedence bits, and even route to specific traffic-engineered paths that may be required to allow a specific QoS through the network. By setting precedence levels on incoming traffic and using them in combination with the queuing tools described earlier in this chapter, you can create differentiated service. These tools provide powerful, simple, and flexible options for implementing QoS policies in your network.
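The classification-and-marking approach described above can be sketched with PBR. This is a minimal example, not a complete policy; the ACL number, interface, and precedence value are assumptions chosen for illustration.

```
! Match web traffic (hypothetical ACL number)
access-list 101 permit tcp any any eq www
!
! Mark matching packets with IP precedence 3; unmatched
! traffic is routed normally
route-map SET-PREC permit 10
 match ip address 101
 set ip precedence 3
!
! Apply the policy to incoming traffic on the LAN interface
interface FastEthernet0/0
 ip policy route-map SET-PREC
```

The marked precedence can then be matched by the queuing tools described earlier in the chapter to deliver differentiated service.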
Chapter 2 emphasizes that at an AS border, outgoing route advertisements affect incoming traffic, and incoming route advertisements affect outgoing traffic. As a result, outgoing and incoming advertisements should be considered separately. This section begins the discussion of BGP route advertisements by examining basic methods of injecting routes into BGP.
The only feature of GTS not already covered to some depth, other than configuration, is the concept of shaping a subset of the traffic on an interface or subinterface. GTS can classify traffic with an ACL traffic permitted by the ACL is shaped based on the parameters specified on the same command. For instance, you could shape all FTP traffic to 32 kbps, and web traffic to 64 kbps.
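The FTP/web example in the paragraph above might look like the following sketch. The interface and ACL numbers are hypothetical; note also that this matches only FTP control traffic on port 21, so a real policy would likely need additional entries for FTP data.

```
! Classify the traffic to be shaped (hypothetical ACL numbers)
access-list 101 permit tcp any any eq ftp
access-list 102 permit tcp any any eq www
!
! Shape FTP to 32 kbps and web traffic to 64 kbps with GTS
interface Serial0/0
 traffic-shape group 101 32000
 traffic-shape group 102 64000
```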
A hacker can select port 443 as a listening port and remain undetected. The hacker can then set up a port redirector without disrupting operations. A port redirector takes traffic coming in on one port and directs it to another host on another port. In this example, the port redirector on the web server takes incoming traffic on port 443 and sends it out to port 3389 on the database server.
You can directly apply the CBTS feature on TE tunnels with a PE router as the head end router and the incoming traffic from a virtual routing forwarding (VRF) interface. You can map each EXP bits value to a particular TE tunnel. Three bits for the EXP bits give you eight levels of QoS, so you can even use eight TE tunnels between a pair of LSRs and map each EXP bits value to a different TE tunnel. The command to assign CBTS to a TE tunnel is as follows
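The paragraph above ends by introducing the CBTS tunnel command; a hedged sketch of its use follows. The tunnel numbers and EXP values are illustrative, not taken from the original example.

```
! Map EXP 5 (for example, voice) to one TE tunnel...
interface Tunnel1
 tunnel mpls traffic-eng exp 5
!
! ...and let a second tunnel carry all remaining EXP values
interface Tunnel2
 tunnel mpls traffic-eng exp default
```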
10 Permit TCP from any to link-based remediation resources equaling 80 or 443. Description: Allow web traffic to the appropriate remediation resources.
20 Permit TCP from any to CAM for file-based remediation equaling 80 or 443. Description: Allow web traffic to the CAM for remediation file distribution.
20 Permit TCP from guest user role to any equaling port 80 or 443. Description: Allow web traffic from clients in the guest user role to the Internet.
Employees connected to VLAN_C on R5 don't need web access while at work. Block web traffic from Monday through Friday between the hours of 7:00 a.m. and 5:00 p.m.

R5(config)# interface fastethernet 0/1
R5(config-if)# ip access-group web-traffic in

Example 8-76 configures and defines the extended access list named web-traffic.

R5(config)# ip access-list extended web-traffic
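One way to complete the requirement above is a time-based ACL. This is a sketch under the stated assumptions (HTTP on port 80 only; a hypothetical time-range name), not the book's Example 8-76 itself.

```
R5(config)# time-range NO-WEB
R5(config-time-range)# periodic weekdays 7:00 to 17:00
R5(config)# ip access-list extended web-traffic
R5(config-ext-nacl)# deny tcp any any eq www time-range NO-WEB
R5(config-ext-nacl)# permit ip any any
R5(config)# interface fastethernet 0/1
R5(config-if)# ip access-group web-traffic in
```

If HTTPS must be blocked as well, a matching deny entry for port 443 would be added before the final permit.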
The next example, Example 6-34, shows how a two-bucket traffic policy sets the ToS bits for different traffic types and burst sizes. Example 6-34 shows how class Servers sets the traffic policy for all traffic to network 209.145.63.2. Class apps specifies all traffic using the Telnet or SMTP protocols, or passive FTP to 188.8.131.52, and class web specifies HTTP web traffic. In the example, traffic belonging to the Servers class that conforms to the average bit rate of 4 Mbps with a 750,000-byte normal burst and 1,500,000-byte excess burst will have its DSCP value changed to cs2. Traffic exceeding the normal burst will have its DSCP value changed, and any Servers traffic that violates the excess burst rate will be transmitted without a DSCP value change. Class apps specifies that traffic going to the servers on the 184.108.40.206/27 network will have a 3-Mbps average rate, a 562,500-byte normal burst, and a 1,125,000-byte extended burst. Traffic conforming to the apps policy will have its DSCP value set to cs3, traffic exceeding the normal burst will have its DSCP changed to cs4, and tr...
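The policer parameters described above can be sketched with the MQC police command. This is a hedged reconstruction, not the book's Example 6-34: the class-map match criteria, the Servers exceed-action DSCP, and both violate actions are assumptions, and DSCP values are given numerically (cs2 = 16, cs3 = 24, cs4 = 32).

```
policy-map TWO-BUCKET
 class Servers
  ! 4 Mbps average, 750,000-byte normal burst, 1,500,000-byte excess burst
  ! conform: mark cs2 (16); exceed DSCP assumed; violate: send unchanged
  police 4000000 750000 1500000 conform-action set-dscp-transmit 16 exceed-action set-dscp-transmit 8 violate-action transmit
 class apps
  ! 3 Mbps average, 562,500-byte normal burst, 1,125,000-byte extended burst
  ! conform: mark cs3 (24); exceed: mark cs4 (32); violate action assumed
  police 3000000 562500 1125000 conform-action set-dscp-transmit 24 exceed-action set-dscp-transmit 32 violate-action transmit
```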
Step 3 Traffic analysis If possible, use traffic analysis to provide information about the applications and protocols used and to reveal any shortcomings in the network. NOTE Although traffic analysis is a good idea in principle, it is often too costly in terms of time and effort to do in practice.
Tail drop occurs when a packet needs to be added to a queue, but the queue is full. Yes, tail drop is indeed that simple. However, tail drop results in some interesting behavior in real networks, particularly when most traffic is TCP based, but with some UDP traffic. Of course, the Internet today delivers mostly TCP traffic, because web traffic uses HTTP, and HTTP uses TCP.
Two Internet scalability challenges are the depletion of registered IP version 4 (IPv4) address space and scaling in routing. Cisco IOS Network Address Translation (NAT) and Port Address Translation (PAT) are mechanisms for conserving registered IPv4 addresses in large networks and simplifying IPv4 address management tasks. NAT and PAT translate IPv4 addresses within private internal networks to legal IPv4 addresses for transport over public external networks, such as the Internet, without requiring a registered subnet address. Incoming traffic is translated back for delivery within the inside network.
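A minimal PAT (overload) sketch illustrates the translation described above. All interface names and addresses are hypothetical.

```
! Inside LAN using private addressing
interface FastEthernet0/0
 ip address 10.1.1.1 255.255.255.0
 ip nat inside
!
! Outside interface facing the public network
interface Serial0/0
 ip nat outside
!
! Define which inside addresses are eligible for translation,
! then overload them onto the outside interface address
access-list 1 permit 10.1.1.0 0.0.0.255
ip nat inside source list 1 interface Serial0/0 overload
```

Return (incoming) traffic matches the translation table entry created by the outbound flow and is translated back for delivery on the inside network.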
An important principle to remember when working with inter-AS traffic is that each physical link actually represents two logical links: one for incoming traffic and one for outgoing traffic (see Figure 2-11). The routes you advertise in each direction influence the traffic separately. Avi Freedman, who has written many excellent articles on ISP issues, calls a route advertisement a promise to carry packets to the address space represented in the route. In Figure 2-10, the subscriber's router is advertising a default route into the local AS, a promise to deliver packets to any destination for which there is not a more-specific route. And the ISP's router, advertising a route to 220.127.116.11/20, is promising to deliver traffic to the subscriber's AS. The outgoing traffic from the subscriber's AS is the result of the default route, and the incoming traffic to the subscriber's AS is the result of the route advertised by the ISP's router. This concept might seem somewhat trivial and obvious...
The LOCAL_PREF attribute affects only traffic leaving the AS. To influence incoming traffic, the MULTI_EXIT_DISC attribute, known as the MED for short, is used. This optional nontransitive attribute is carried in EBGP updates and allows an AS to inform another AS of its preferred ingress points. If all else is equal, an AS receiving multiple routes to the same destination compares the MEDs of the routes. Unlike LOCAL_PREF, in which the largest value is preferred, the lowest MED value is preferred. This is because the MED is considered a metric, and with a metric the lowest value (the shortest distance) is preferred. Figure 2-27 shows how you can use the MED. Here, a subscriber is dual-homed to a single ISP. AS 525 prefers that its incoming traffic use the DS-3 link, with the DS-1 link used only for backup. The MED in the updates passing across the DS-3 link is set to 0 (the default), and the MED in the updates passing across the DS-1 link is set to 100. If nothing else differs in the two...
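The MED settings described above might be configured on the subscriber's router as follows. The neighbor addresses and the ISP AS number are hypothetical; only AS 525 and the MED values come from the scenario.

```
! Outbound policy for the DS-3 neighbor: MED 0 (preferred)
route-map MED-DS3 permit 10
 set metric 0
!
! Outbound policy for the DS-1 neighbor: MED 100 (backup)
route-map MED-DS1 permit 10
 set metric 100
!
router bgp 525
 neighbor 192.168.1.1 remote-as 100
 neighbor 192.168.1.1 route-map MED-DS3 out
 neighbor 192.168.2.1 remote-as 100
 neighbor 192.168.2.1 route-map MED-DS1 out
```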
The previous scenarios send the more-specific routes of AS 100 to AS 200 so that AS 200 can implement routing policy. That is, AS 200 uses the routes to set routing preferences for sending traffic to AS 100. AS 100 also can influence its incoming traffic by manipulating its outgoing advertisements. For example, advertising 192.168.193.0/24 over the Stowe-Sugarbush link and not over the Mammoth-Diamond link causes incoming traffic to use the Stowe-Sugarbush link. An administrator might want to implement such a policy if the AS is geographically diverse. For instance, Stowe might be in Vermont and Mammoth in California. The administrator might want incoming traffic to use the ingress point closest to the destination, to minimize internal routing. An aggregate route is advertised over both links for backup so that if either link fails, all incoming traffic is routed to the remaining link.
Configures the interface to classify incoming traffic packets according to the class of service (CoS) value. For untagged packets, the default CoS value is used. The default port CoS value is 0. Configures the interface to classify incoming traffic packets according to the differentiated services code point (DSCP) value. For a non-IP packet, the packet CoS value is used if the packet is tagged. For an untagged packet, the default port CoS value is used.
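The two trust settings described above might be applied as follows on a Catalyst switch. This is a sketch; the interface names are hypothetical, and on many platforms QoS must first be enabled globally with mls qos.

```
! Enable QoS globally (required on many Catalyst platforms)
mls qos
!
! Trust the CoS value of tagged frames on this port
interface GigabitEthernet0/1
 mls qos trust cos
!
! Trust the DSCP value of IP packets on this port
interface GigabitEthernet0/2
 mls qos trust dscp
```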
In either case, IBNS operates at the edge of the network on access ports. When a device is plugged into the network, Layers 1 and 2 are established, but before access to the network is permitted, the access switch challenges the device to provide its identity credentials. The credentials provided by the device are passed through the network to authentication servers in the data center and are validated. Validated devices are granted access, and unauthorized devices are prevented from sending traffic to the network.
3 Networks 192.168.1.0, 192.168.2.0, 192.168.3.0, 192.168.4.0, and 192.168.5.0 exist within AS 2. The administrator of this AS wants the neighboring AS to prefer R5 when sending traffic to 192.168.1.0 and 192.168.3.0. The neighboring AS should prefer R6 when sending traffic to 192.168.2.0 and 192.168.4.0. In each case, the less-preferred link serves as a backup to the more-preferred link. 192.168.5.0 is a private network and must not be advertised to any EBGP peer. Modify the configurations written in Exercise 2 to implement this policy.
By definition, IDS and IPS solutions incorporate signatures that trigger based on information that is located throughout the packet. Inline deep-packet inspection refers to the ability to perform actual protocol analysis on network traffic. Many applications (including malicious programs) attempt to use open ports to pass information through access control lists on your network. Using inline deep-packet inspection enables you to enforce your security policy beyond basic port numbers. For instance, this functionality enables you to prevent attackers (and applications) from sending traffic to or from port 80 unless the traffic is legitimate HTTP traffic.
When using pure packet filtering, the designer of the ruleset must know exactly how the supported applications appear on the network, and must design rulesets to permit the corresponding outgoing and incoming traffic. In practice, the most difficult aspect of rule definition is the proper filtering of return (server-to-client) traffic, which often cannot be filtered securely (usually when incoming sessions to random client ports are needed, such as with FTP).
Network-based IPS and host-based IPS (HIPS) can usually notify you when a reconnaissance attack is under way. This warning allows you to better prepare for the coming attack or to notify the Internet service provider (ISP) that is hosting the system launching the reconnaissance probe. ISPs compare incoming traffic to the intrusion detection system (IDS) or the IPS signatures in their database. Signatures are characteristics of particular traffic patterns. A signature, such as several packets to different destination ports from the same source address within a short period of time, can be used to detect port scans. Another such signature could be SYN to a non-listening port.
A service provider can offer a VPN customer the pipe model, hose model, or a combination of both. The PE routers at the ingress determine which traffic receives a particular CoS, depending on the incoming interface, IP source and destination addresses, IP precedence, TCP port numbers, or a combination of these values. The ingress PE routers can also police incoming traffic and mark packets that are out-of-rate, based on the SLA drawn with the customer. These packets can be marked differently and dropped in case of congestion.
Another shortcoming of CIDR is that multihoming to different service providers becomes more difficult. Figure 4-5 recaps the problem as discussed in Chapter 2. A subscriber is multihomed to ISP1 and ISP2 and has a CIDR block that is a subset of ISP1's block. To establish correct communication with the Internet, both ISP1 and ISP2 must advertise the subscriber's specific address space of 18.104.22.168/23. If ISP2 does not advertise this address, all the subscriber's incoming traffic passes through ISP1. And if ISP2 advertises 22.214.171.124/23, whereas ISP1 advertises only its own CIDR block, all the subscriber's incoming traffic matches the more-specific route and passes through ISP2. This poses several problems.
Controlling the load distribution of incoming traffic over multiple links is more difficult in the multihomed scenario when links to multiple service providers are used. You cannot use the multi-exit discriminator (MED) when the customer connects to multiple providers because the updates are sent to two different autonomous systems. Recall that the MED is used only when you compare routes that are received from a single directly connected AS over two parallel links. Therefore, route selection decisions will most likely use the AS-path attribute and prefer the route with the shortest AS-path length.
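Because the MED cannot be used across different providers, a common alternative is AS-path prepending on the less-preferred link. This is a sketch only; the AS numbers and neighbor address are hypothetical.

```
! Lengthen the AS path advertised to the backup provider so
! that most of the Internet prefers the other ingress link
route-map PREPEND-BACKUP permit 10
 set as-path prepend 65001 65001
!
router bgp 65001
 neighbor 192.0.2.1 remote-as 65100
 neighbor 192.0.2.1 route-map PREPEND-BACKUP out
```

Remote autonomous systems comparing the two paths then see a longer AS path through the backup provider and, all else being equal, select the other one.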
Attackers can attempt to launch an attack by sending gratuitous ARP (GARP) replies. These GARP messages can tell network devices that the attacker's MAC address corresponds to specific IP addresses. For example, the attacker might be able to convince a PC that the attacker's MAC address is the MAC address of the PC's default gateway. As a result, the PC starts sending traffic to the attacker. The attacker captures the traffic and then forwards the traffic to the appropriate default gateway.
In Figure B.10, the TCP, UDP, and ICMP access lists given earlier as examples are used as filters. Access list 110, from the previous two examples, has been applied to the Token Ring 0 interface to check incoming traffic. Access list 111 is applied to the same interface to check outgoing traffic. Analyze the two access lists carefully, including their interrelationship, and consider the following
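Applying the two access lists to the interface, as described above, takes only the following configuration. The interface name follows the text; the direction keywords determine whether the list checks incoming or outgoing traffic.

```
interface TokenRing0
 ip access-group 110 in
 ip access-group 111 out
```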
The default-information originate command is used with level 2 routers for sending traffic to destinations not found in the local routing table. This command is used to send a default route in the backbone, and it creates an external entry into the L2 LSP. Unlike OSPF, this command does not require a default route to be present in the router that is originating the default route.
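As described above, the command requires no default route on the originating router; a minimal sketch is:

```
! On the level 2 router: originate a default route into the
! IS-IS backbone (creates an external entry in the L2 LSP)
router isis
 default-information originate
```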
Remember that the incoming traffic flow (from the perspective of AS 213) will be a result of the route selection for outgoing traffic in AS 387. The traffic that is going out from AS 387 will end up as incoming traffic in AS 213. If AS 387 configures some changes that cause the route selection process for outgoing traffic to prefer to reach network 10.0.0.0/8 via AS 462, the changes would result in behavior matching the desired administrative policy for AS 213, which specifies that incoming traffic to the AS should be received over the high-speed link.
The fundamental problem that service providers face today when offering native multicast services to end customers is the amount of multicast distribution information (that is, (S, G) or (*, G) states) that must be maintained to provide optimal multicast traffic distribution. When a multicast source becomes active within a particular customer site, the multicast traffic must travel through the service provider network to reach all PE routers that have receivers connected to CE routers for that multicast group. To prevent unnecessary traffic delivery, the service provider must avoid sending traffic to PE routers that have no interested receivers. To accomplish this goal and achieve optimal routing, each P router in the network must maintain state information for all active customer distribution trees.
In the screen shot on the left in the figure, the current configuration of interfaces (fe0/0 and fe0/1) protects the network from incoming traffic. The window at the bottom of the screen displays the ACL rules applied to that traffic flow. You can select and edit these rules as required.
HTTP or web traffic is one of the most popular types of traffic on networks today. ASA includes the ability to inspect HTTP traffic flows to detect possible network attacks. You can initiate the process to configure the inspection of an HTTP traffic flow under the Service Policy Rules section. This process to initiate the creation of a traffic flow for HTTP inspection is similar to the process to define a traffic flow with Service Policy Rules for IPS inspection, as described in the Intrusion Prevention Service section earlier in this chapter. The configuration to inspect a certain HTTP traffic flow results in a class-map and policy-map statement, similar to the CLI output for the IPS Service Policy Rule configuration.
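The resulting CLI might look like the following sketch. The class-map and policy-map names are hypothetical (the ASDM wizard generates its own names), and the interface keyword assumes the policy is applied to the outside interface.

```
! Classify the HTTP traffic flow
class-map HTTP-TRAFFIC
 match port tcp eq www
!
! Apply HTTP application inspection to that class
policy-map HTTP-POLICY
 class HTTP-TRAFFIC
  inspect http
!
! Activate the policy on an interface
service-policy HTTP-POLICY interface outside
```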
As you learned earlier, MPLS uses RSVP messages to build traffic engineering tunnels. In the point-to-multipoint case, you have multiple tail ends and a single head end for the traffic-engineered tunnel. To build a point-to-multipoint tunnel, the head end must send PATH messages to all the tail end nodes with the same tunnel ID. The PATH messages are received by the tail ends, and the tail end nodes respond with a RESV message. While the RESV message travels back toward the head end node, along the path, each core node performs admission control and merges the LSP upstream with the matching tunnel indicated by the tunnel ID. The merged reservation continues upstream toward the head end node, thus creating a multipoint tree from the head end to the tail ends. (See Figure 8-7.)
In multicast forwarding, the source is sending traffic to an arbitrary group of hosts that is represented by a multicast group address. The multicast router must determine which direction is the upstream direction (toward the source) and which one is the downstream direction (or directions). If there are multiple downstream paths, the router replicates the packet and forwards it down the appropriate downstream paths (best unicast route metric), which is not necessarily all paths.
A new class map type called port-filter was created for the port-filter feature. You must first create one or more port-filter class maps before you can create your port-filter service policy. Your port-filter class maps will separate your traffic into classes of traffic. Then, your service policy will define actions on this traffic.
Because the ACL is stateless, the router has no idea whether a persistent SSH session is in place. This leads to the principal limitation of basic ACLs: all a stateless ACL knows is to match incoming traffic against the ACLs applied to an interface. For example, even if there were no SSH session to 10.2.3.4 from network 10.1.1.0/24, host 10.2.3.4 could send traffic to the 10.1.1.0/24 network provided the source port is 22. The established flag on the ACL adds an additional requirement that the acknowledgment (ACK) or reset (RST) bit is set in the TCP header.
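The SSH scenario above might be expressed as follows. The ACL numbers and interface are hypothetical; note that the established keyword only checks TCP flag bits, so it narrows, but does not eliminate, the stateless-ACL limitation.

```
! Outbound: allow SSH from the inside network to 10.2.3.4
access-list 120 permit tcp 10.1.1.0 0.0.0.255 host 10.2.3.4 eq 22
!
! Inbound: permit return SSH traffic only when the ACK or RST
! bit is set, i.e. not the first packet of a new connection
access-list 121 permit tcp host 10.2.3.4 eq 22 10.1.1.0 0.0.0.255 established
!
interface Serial0/0
 ip access-group 121 in
```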