802.1Q Tunneling

[Figure: 802.1Q tunneling topology. Customer A (VLANs 1 to 100) and Customer B (VLANs 1 to 200) each connect to the service-provider edge switches over asymmetric links; 802.1Q trunks carry the tunneled traffic across the service-provider network.]

Business customers of service providers often have specific requirements for VLAN IDs and for the number of VLANs to be supported. The VLAN ranges required by different customers in the same service-provider network might overlap, and customer traffic might be mixed as it crosses the shared infrastructure. Assigning a unique range of VLAN IDs to each customer would restrict customer configurations and could easily exceed the 4096-VLAN limit of the 802.1Q specification.

Using the 802.1Q tunneling feature, service providers can use a single VLAN to support customers who have multiple VLANs. Customer VLAN IDs are preserved, and traffic from different customers is segregated within the service-provider infrastructure even when it appears to be on the same VLAN. 802.1Q tunneling expands the VLAN space by using a VLAN-in-VLAN hierarchy and retagging the already tagged packets. A port configured to support 802.1Q tunneling is called a tunnel port. When you configure tunneling, you assign the tunnel port to a VLAN that is dedicated to tunneling. Each customer requires a separate VLAN, but that one VLAN supports all of the customer's VLANs.

Customer traffic that is tagged in the normal way with the appropriate VLAN IDs comes from an 802.1Q trunk port on the customer device and enters a tunnel port on the service-provider edge switch. The link between the customer device and the edge switch is an asymmetric link, because one end is configured as an 802.1Q trunk port and the other end is configured as a tunnel port. You assign the tunnel port interface to an access VLAN ID that is unique to each customer.
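A minimal sketch of such an asymmetric link follows, mirroring the Catalyst 3550 example later in this section; the customer switch name and interface number are assumptions for illustration only:

! Customer device: a normal 802.1Q trunk toward the provider edge
! (switch name and interface number assumed for illustration)
CustomerSw(config)# interface fastethernet0/24
CustomerSw(config-if)# switchport trunk encapsulation dot1q
! (omit the encapsulation command on platforms that support only 802.1Q)
CustomerSw(config-if)# switchport mode trunk

! Service-provider edge switch: tunnel port in the customer's dedicated access VLAN
3550(config)# interface fastethernet0/5
3550(config-if)# switchport access vlan 3
3550(config-if)# switchport mode dot1q-tunnel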

Packets coming from the customer trunk port into the tunnel port on the service-provider edge switch are normally 802.1Q-tagged with the appropriate VLAN ID. The tagged packets remain intact inside the switch and, when they exit the trunk port into the service-provider network, are encapsulated with another layer of 802.1Q tag (called the metro tag) that contains the VLAN ID unique to the customer. The original 802.1Q tag from the customer is preserved in the encapsulated packet. Therefore, packets entering the service-provider infrastructure are double-tagged, with the outer tag containing the customer's access VLAN ID and the inner tag containing the VLAN ID of the incoming customer traffic.
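Conceptually, the tag stacking looks like this, where VLAN 3 is the customer's access (metro) VLAN from the example in this section and VLAN 50 is a hypothetical customer VLAN; each 802.1Q tag adds 4 bytes:

Customer link (single-tagged):  | DA | SA | 802.1Q tag: VLAN 50 | EtherType | payload | FCS |
Provider core (double-tagged):  | DA | SA | metro tag: VLAN 3 | 802.1Q tag: VLAN 50 | EtherType | payload | FCS |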

When the double-tagged packet enters another trunk port in a service-provider core switch, the outer tag is stripped as the packet is processed inside the switch. When the packet exits another trunk port on the same core switch, the same metro tag is again added to the packet.

When the packet enters the trunk port of the service-provider egress switch, the outer tag is again stripped as the packet is processed internally on the switch. However, the metro tag is not added when the packet is sent out the tunnel port on the edge switch into the customer network; the packet is sent as a normal 802.1Q-tagged frame to preserve the original VLAN numbers in the customer network.

The following example configures two Catalyst 3550 edge ports as 802.1Q tunnel ports in the customer's dedicated VLAN and enables tagging of the native VLAN on all 802.1Q trunks:

3550(config)# int fa0/5
3550(config-if)# switchport access vlan 3
3550(config-if)# switchport mode dot1q-tunnel
3550(config-if)# exit
3550(config)# int fa0/6
3550(config-if)# switchport access vlan 3
3550(config-if)# switchport mode dot1q-tunnel
3550(config-if)# exit
3550(config)# vlan dot1q tag native

When you configure 802.1Q tunneling, you should always use asymmetric links for traffic going into or out of a tunnel, and you should dedicate one VLAN to each tunnel. You should also be aware of the configuration requirements for native VLANs and for the maximum transmission unit (MTU) size.

Native VLANs

When configuring 802.1Q tunneling on an edge switch, you must use 802.1Q trunk ports for sending out packets into the service-provider network. However, packets going through the core of the service-provider network might be carried through 802.1Q trunks, ISL trunks, or nontrunking links. When 802.1Q trunks are used in these core switches, the native VLANs of the 802.1Q trunks must not match any native VLAN of the nontrunking (tunneling) port on the same switch because traffic on the native VLAN would not be tagged on the 802.1Q transmitting trunk port.

These are some ways to solve this problem:

■ Use ISL trunks between core switches in the service-provider network. Although customer interfaces connected to edge switches must be 802.1Q trunks, we recommend using ISL trunks for connecting switches in the core layer.

■ Configure the edge switch so that all packets going out an 802.1Q trunk, including the native VLAN, are tagged by using the vlan dot1q tag native global configuration command. If the switch is configured to tag native VLAN packets on all 802.1Q trunks, the switch accepts untagged packets but sends only tagged packets (see the configuration sketch after this list).

■ Ensure that the native VLAN ID on the edge switch trunk port is not within the customer VLAN range. For example, if the trunk port carries traffic of VLANs 100 to 200, assign the native VLAN a number outside that range.
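A brief sketch of the two configuration-based workarounds, assuming gi0/1 is the edge switch trunk toward the core and that VLAN 999 is unused by any customer (both values are assumptions for illustration):

! Option 1: tag the native VLAN on every 802.1Q trunk on the edge switch
3550(config)# vlan dot1q tag native

! Option 2: move the trunk's native VLAN outside the customer VLAN range
3550(config)# interface gigabitethernet0/1
3550(config-if)# switchport trunk encapsulation dot1q
3550(config-if)# switchport mode trunk
3550(config-if)# switchport trunk native vlan 999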

System MTU

The default system MTU for traffic on the Catalyst 3550 switch is 1500 bytes. You can configure the switch to support frames larger than 1500 bytes by using the system mtu global configuration command. Because the 802.1Q tunneling feature increases the frame size by 4 bytes when the metro tag is added, you must configure all switches in the service-provider network to be able to process maximum-size frames by increasing the switch system MTU to at least 1504 bytes. The maximum allowable system MTU for Catalyst 3550 Gigabit Ethernet switches is 2000 bytes; the maximum system MTU for Fast Ethernet switches is 1546 bytes.
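For example, to accommodate the 4-byte metro tag on a Fast Ethernet Catalyst 3550, you might raise the system MTU as follows; note that on this platform the new system MTU typically takes effect only after a reload:

3550(config)# system mtu 1504
3550(config)# end
3550# reload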

Use the steps outlined in the following table to configure a switch port on the Catalyst 3550 as an 802.1Q tunnel port:

Table 4-14: Configuring a Switch Port as an 802.1Q Tunnel Port

Command: 3550(config)# interface interface-id
Purpose: Enter interface configuration mode for the interface to be configured as a tunnel port. This should be the edge port in the service-provider network that connects to the customer switch. Valid interfaces include physical interfaces and port-channel logical interfaces (port channels 1 to 64).

Command: 3550(config-if)# switchport access vlan vlan-id
Purpose: Specify the default VLAN, which is used if the interface stops trunking. This is the VLAN ID specific to the particular customer.

Command: 3550(config-if)# switchport mode dot1q-tunnel
Purpose: Set the interface as an 802.1Q tunnel port.

Command: 3550(config)# vlan dot1q tag native
Purpose: (Optional) Set the switch to enable tagging of native VLAN packets on all 802.1Q trunk ports. When this is not set, if a customer VLAN ID is the same as the native VLAN, the trunk port does not apply the metro tag, and packets might be sent to the wrong destination.

Use the no switchport mode dot1q-tunnel interface configuration command to return the port to the default state of dynamic desirable. Use the no vlan dot1q tag native global configuration command to disable tagging of native VLAN packets.
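You can verify the resulting configuration with the following Catalyst 3550 show commands (output omitted here):

3550# show dot1q-tunnel
! Lists the interfaces currently configured as 802.1Q tunnel ports
3550# show vlan dot1q tag native
! Indicates whether tagging of the native VLAN is enabled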

Although 802.1Q tunneling works well for Layer 2 packet switching, there are incompatibilities with some Layer 2 features and with Layer 3 switching.

■ A tunnel port cannot be a routed port.

■ IP routing is not supported on a VLAN that includes 802.1Q tunnel ports. Packets received from a tunnel port are forwarded based only on Layer 2 information. If routing is enabled on the switch virtual interface (SVI) that includes tunnel ports, untagged IP packets received from the tunnel port are recognized and routed by the switch. This allows the customer to access the Internet through its native VLAN. If this access is not required, you should not configure SVIs on VLANs that include tunnel ports.

■ Fallback bridging is not supported on tunnel ports. Because all 802.1Q-tagged packets received from a tunnel port are treated as non-IP packets, if fallback bridging is enabled on VLANs that have tunnel ports configured, IP packets would be improperly bridged across VLANs. Therefore, you must not enable fallback bridging on VLANs with tunnel ports.

■ Tunnel ports do not support IP access control lists (ACLs).

■ Layer 3 quality of service (QoS) ACLs and other QoS features related to Layer 3 information are not supported on tunnel ports. MAC-based QoS is supported on tunnel ports.

■ EtherChannel port groups are compatible with tunnel ports as long as the 802.1Q configuration is consistent within an EtherChannel port group (see the sketch after this list).

■ PAgP and UDLD are supported on 802.1Q tunnel ports.

■ Dynamic Trunking Protocol (DTP) is not compatible with 802.1Q tunneling because you must manually configure asymmetric links with tunnel ports and trunk ports.

■ Loopback detection is supported on 802.1Q tunnel ports.

■ When a port is configured as an 802.1Q tunnel port, spanning tree bridge protocol data unit (BPDU) filtering is automatically enabled on the interface. Cisco Discovery Protocol (CDP) is automatically disabled on the interface.
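As a rough sketch of the EtherChannel requirement noted above, both member ports need identical tunnel-port settings before they are bundled; the interface range, VLAN, and channel-group number below are assumptions for illustration:

3550(config)# interface range fastethernet0/7 - 8
3550(config-if-range)# switchport access vlan 40
3550(config-if-range)# switchport mode dot1q-tunnel
3550(config-if-range)# channel-group 2 mode on
! Identical access VLAN and dot1q-tunnel mode on both members keeps the
! 802.1Q configuration consistent within the port group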
