
In this example, each connection type could be given its own vSwitch because there were enough physical network adapters to meet our needs. Let's look at another example, where an ESX Server host has only two network adapters to work with. Figure 3.17 shows a network environment with five IP subnets: management, production, test, IP storage, and VMotion, where the production, test, and IP storage networks are configured as vLANs on the same physical network. Figure 3.18 displays a virtual network architecture that uses vLANs and combines multiple connection types into a single vSwitch.

Figure 3.17 Without the use of port groups and vLANs in the vSwitches, each IP subnet will require a separate vSwitch with the appropriate connection type. 

Figure 3.18 With a limited number of physical network adapters available in an ESX Server host, vSwitches will need multiple connection types to support the network architecture.

The vSwitch and connection type architecture of ESX Server, though robust and customizable, is subject to all of the following limits:

♦ An ESX Server host cannot have more than 4,096 ports.

♦ An ESX Server host cannot have more than 1,016 ports per vSwitch.

♦ An ESX Server host cannot have more than 127 vSwitches.

♦ An ESX Server host cannot have more than 512 virtual switch port groups.

Virtual Switch Configurations… Don't Go Too Big!

Although a vSwitch can be created with a maximum of 1,016 ports (really 1,024, with eight reserved), doing so is not recommended if growth is anticipated. Because an ESX Server host cannot have more than 4,096 ports (1,024 × 4), creating vSwitches with 1,016 ports would leave room for only four vSwitches. Virtual switches should be created with just enough ports to cover existing needs and projected growth.
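To see why oversized vSwitches limit flexibility, the arithmetic is worth spelling out. The short sketch below is plain Python and purely illustrative; the per-vSwitch overhead of eight ports is inferred from the 1,016-versus-1,024 figure above, and the example port counts are arbitrary.

```python
# Illustrative only: how many vSwitches of a given size fit on one ESX Server host.
HOST_PORT_LIMIT = 4096   # maximum ports per host
VSWITCH_OVERHEAD = 8     # a 1,016-port vSwitch actually consumes 1,024 ports

def max_vswitches(ports_per_vswitch: int) -> int:
    """Number of vSwitches of this size that fit under the host port limit."""
    return HOST_PORT_LIMIT // (ports_per_vswitch + VSWITCH_OVERHEAD)

for size in (1016, 504, 248, 120, 56, 24):
    print(f"{size:>5} ports per vSwitch -> room for {max_vswitches(size)} vSwitches")
```

With 1,016-port vSwitches the host tops out at four; smaller vSwitches leave room for many more, which is the reason for sizing them to actual need rather than to the maximum.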

By default, all virtual network adapters connected to a vSwitch have access to the full amount of bandwidth on the physical network adapter with which the vSwitch is associated. In other words, if a vSwitch is assigned a 1Gbps network adapter, then each virtual machine configured to use the vSwitch has access to 1Gbps of bandwidth. Naturally, if contention becomes a bottleneck hindering virtual machine performance, a NIC team would be the best option. However, as a complement to the introduction of a NIC team, it is also possible to enable and configure traffic shaping. Traffic shaping establishes hard-coded limits for peak bandwidth, average bandwidth, and burst size in order to reduce a virtual machine's outbound bandwidth capability.

As shown in Figure 3.19, the peak bandwidth value and the average bandwidth value are specified in Kbps, and the burst size is configured in units of KB. The value entered for the average bandwidth dictates the data transfer per second across the vSwitch. The peak bandwidth value identifies the maximum amount of bandwidth a vSwitch can pass without dropping packets. Finally, the burst size defines the maximum amount of data included in a burst; it is effectively a calculation of bandwidth multiplied by time. During periods of high utilization, if a burst exceeds the configured value, packets are dropped in favor of other traffic; however, if the queue for network traffic processing is not full, the packets are retained for transmission at a later time.
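Because the burst size is just bandwidth multiplied by time, a quick back-of-the-envelope check shows how the three values interact. The snippet below is a plain Python sketch using made-up numbers; it is not an ESX Server tool, only an illustration of the arithmetic in the same Kbps/KB units as the dialog.

```python
# Illustrative arithmetic only: burst size = bandwidth x time.
# Units follow the VI Client dialog: bandwidth in Kbps, burst size in KB.

def burst_duration_seconds(peak_kbps: float, burst_kb: float) -> float:
    """How long a burst at the peak rate can last before the configured
    burst size (the maximum data included in a burst) is used up."""
    return (burst_kb * 8) / peak_kbps   # KB -> kilobits, then divide by Kbps

def burst_size_kb(peak_kbps: float, seconds: float) -> float:
    """Burst size needed to sustain the peak rate for a given number of seconds."""
    return peak_kbps * seconds / 8      # kilobits -> KB

# Hypothetical values: 200,000 Kbps peak with a 102,400KB burst size.
print(burst_duration_seconds(200_000, 102_400))   # ~4.1 seconds at the peak rate
print(burst_size_kb(200_000, 2))                  # 50,000KB allows a 2-second burst
```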

Traffic Shaping as a Last Resort

Use the traffic shaping feature sparingly. Traffic shaping should be reserved for situations where virtual machines are competing for bandwidth and the physical chassis lacks the expansion slots needed to add more network adapters. Given the low cost of network adapters, it is usually more worthwhile to build vSwitches with NIC teams than to cut the bandwidth available to a set of virtual machines.

Perform the following steps to configure traffic shaping: 

1. Use the VI Client to establish a connection to a VirtualCenter server or an ESX Server host.

2. Click the hostname in the inventory panel on the left, select the Configuration tab from the details pane on the right, and then select Networking from the Hardware menu list. 

3. Click the Properties link for the virtual switch, select the name of the virtual switch or port group from the Configuration list, and then click the Edit button.

4. Select the Traffic Shaping tab.

5. Select the Enabled option from the Status drop-down list.

6. Adjust the Average Bandwidth value to the desired number of Kbps.

7. Adjust the Peak Bandwidth value to the desired number of Kbps.

8. Adjust the Burst Size value to the desired number of KB.

Figure 3.19 Traffic shaping reduces the outbound bandwidth available to a port group.
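The same shaping values can also be applied programmatically through the VMware vSphere API rather than the VI Client. The sketch below uses the pyVmomi Python bindings against a current vSphere environment (the classic ESX 3.x toolset predates pyVmomi); the host name, credentials, and vSwitch name are placeholders, and the API expects bits per second and bytes rather than the Kbps/KB shown in the dialog, so verify the units against your SDK documentation.

```python
# Rough sketch with pyVmomi (pip install pyvmomi); names and values are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]                              # first host in the inventory
    net_sys = host.configManager.networkSystem

    for vss in net_sys.networkInfo.vswitch:
        if vss.name != "vSwitch1":
            continue
        spec = vss.spec
        shaping = vim.host.NetworkPolicy.TrafficShapingPolicy()
        shaping.enabled = True
        # The API takes bits per second and bytes, not the dialog's Kbps/KB.
        shaping.averageBandwidth = 100_000 * 1000    # 100,000 Kbps average
        shaping.peakBandwidth = 200_000 * 1000       # 200,000 Kbps peak
        shaping.burstSize = 102_400 * 1024           # 102,400KB burst size
        spec.policy.shapingPolicy = shaping
        net_sys.UpdateVirtualSwitch(vswitchName=vss.name, spec=spec)
finally:
    Disconnect(si)
```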

With all the flexibility provided by the different virtual networking components, you can be assured that whatever the physical network configuration may hold in store, there will be several ways to integrate the virtual networking. What you configure today may change as the infrastructure changes or as the hardware changes. Ultimately the tools provided by ESX Server are enough to ensure a successful communication scheme between the virtual and physical networks.

Creating and Managing NIC Teams

In the previous section, we looked at some good examples of virtual network architectures built to work with the physical networking components. Now that you have some design and configuration basics under your belt, let's move on to extending the virtual networking beyond just establishing communication. A NIC team can support any of the connection types discussed in the previous section. Using NIC teams provides redundancy and load balancing of network communications for the Service Console, the VMkernel, and virtual machines.

A NIC team, shown in Figure 3.20 and Figure 3.21, is defined as a vSwitch configured with an association to multiple physical network adapters (uplinks). As mentioned in the previous section, an ESX Server host can have a maximum of 32 uplinks; those uplinks can be spread across multiple vSwitches or combined as a NIC team on a single vSwitch.

Successful NIC teaming requires that all uplinks be connected to physical switches that belong to the same broadcast domain. As shown in Figure 3.22, all of the physical network adapters in the NIC team should be connected either to the same physical switch or to separate physical switches that are connected to one another.
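The association between a vSwitch and its uplinks can be scripted as well. The rough sketch below again uses the pyVmomi bindings and reuses the connection and networkSystem objects from the earlier traffic shaping example; the vSwitch and vmnic names are placeholders, and the load-balancing setting shown is simply the default route-by-originating-virtual-port-ID policy.

```python
# Rough pyVmomi sketch (placeholder names); reuses "net_sys" from the earlier example.
from pyVmomi import vim

uplinks = ["vmnic1", "vmnic2"]        # physical adapters to combine into the team

for vss in net_sys.networkInfo.vswitch:
    if vss.name != "vSwitch1":
        continue
    spec = vss.spec
    # Bond both physical adapters (uplinks) to the vSwitch.
    spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=uplinks)
    # Teaming policy: balance outbound traffic by originating virtual port ID,
    # with both adapters active.
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy()
    teaming.policy = "loadbalance_srcid"
    teaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(activeNic=uplinks)
    spec.policy.nicTeaming = teaming
    net_sys.UpdateVirtualSwitch(vswitchName=vss.name, spec=spec)
```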

Figure 3.20 Virtual switches, like vSwitch1, with multiple uplinks offer redundancy and load balancing.

Figure 3.21 A NIC team is identified by the association of multiple physical network adapters assigned to a vSwitch.

Figure 3.22 All of the physical network adapters that make up a NIC team must belong to the same Layer 2 broadcast domain.

Constructing NIC Teams