Though the VMkernel does not allow vSwitches to be interconnected, and therefore prevents loops at that level, it would still be possible to manually create a loop by connecting a virtual machine with two network adapters to two different vSwitches and then bridging the virtual network adapters inside the guest. Since bridging loops are a common network problem, the fact that vSwitches cannot be interconnected, and thus cannot form loops on their own, is a real benefit for network administrators.

ESX Server allows for the configuration of three types of virtual switches. The type of a vSwitch is determined by how many physical network adapters, if any, are associated with it. The three types of vSwitches, shown in Figure 3.2 and Figure 3.3, include:

♦ Internal-only virtual switch

♦ Virtual switch bound to a single network adapter

♦ Virtual switch bound to two or more network adapters

Figure 3.2 Virtual switches provide the cornerstone of virtual machine, VMkernel, and Service Console communication.

Figure 3.3 Virtual switches created on an ESX Server host manage all the different forms of communication required in a virtual infrastructure.

Virtual Switch Configuration Maximums

The maximum number of vSwitches for an ESX Server host is 127. Virtual switches created through the VI Client are given default names of vSwitch#, where # begins with 0 and increases sequentially, so the last possible default name is vSwitch126.
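The same vSwitch objects can also be created from the Service Console with the esxcfg-vswitch utility discussed later in this section. The following is a minimal sketch; the name vSwitch1 is an arbitrary example, not a required name:

# List the vSwitches currently defined on the host
esxcfg-vswitch -l

# Create a new virtual switch named vSwitch1
esxcfg-vswitch -a vSwitch1

# Delete a virtual switch that is no longer needed
esxcfg-vswitch -d vSwitch1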

 Creating and Configuring Virtual Switches 

By default, every virtual switch is created with 64 ports. However, only 56 of those ports are available for connections, and only 56 are displayed when looking at a vSwitch configuration through the Virtual Infrastructure (VI) Client. Reviewing a vSwitch configuration with the esxcfg-vswitch command shows all 64 ports. The eight-port difference exists because the VMkernel reserves eight ports for its own use.

Once a virtual switch has been created, the number of ports can be adjusted to 8, 24, 56, 120, 248, 504, or 1016. These are the values reflected in the VI Client. But, as noted, eight ports are reserved, and therefore the command line will show 16, 32, 64, 128, 256, 512, or 1,024 ports for the same virtual switches.

Changing the number of ports in a virtual switch requires a reboot of the ESX Server 3.5 host on which the virtual switch was altered.
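To see the VMkernel's view of the port count described above, list the vSwitch configuration from the Service Console. A brief sketch:

# Show every vSwitch as the VMkernel sees it; the ports column for a
# default vSwitch reads 64 here, versus the 56 usable ports the VI Client shows
esxcfg-vswitch -l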

Without an uplink, the internal-only vSwitch allows communication only between virtual machines that exist on the same ESX Server host. Virtual machines that communicate through an internal-only vSwitch do not pass any traffic through a physical adapter on the ESX Server host. As shown in Figure 3.4, communication between virtual machines connected to an internal-only switch takes place entirely in software and happens at whatever speed the VMkernel can perform the task.
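An internal-only vSwitch is simply a vSwitch that is never linked to a physical uplink. A minimal sketch from the Service Console, assuming the arbitrary names vSwitch2 and InternalLAN:

# Create a vSwitch and deliberately leave it without an uplink
esxcfg-vswitch -a vSwitch2

# Add a port group for virtual machines; their traffic stays in software
# and never touches a physical adapter
esxcfg-vswitch -A InternalLAN vSwitch2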

Figure 3.4 Virtual machines communicating through an internal-only vSwitch do not pass any traffic through a physical adapter.

No Uplink, No VMotion 

Virtual machines connected to an internal-only vSwitch are not VMotion capable. However, if the virtual machine is disconnected from the internal-only vSwitch, a warning will be provided but VMotion will succeed if all other requirements have been met. The requirements for VMotion will be covered in Chapter 9. 

For virtual machines to communicate with resources beyond the virtual machines hosted on the local ESX Server host, a vSwitch must be configured to use a physical network adapter, or uplink. A vSwitch bound to a single physical adapter allows virtual machines to establish communication with physical servers on the network, or with virtual machines on other ESX Server hosts that are also connected to a vSwitch bound to a physical adapter. The vSwitch associated with a physical network adapter provides virtual machines with the amount of bandwidth the physical adapter is configured to support. For example, a vSwitch bound to a network adapter with a 1Gbps maximum speed will provide up to 1Gbps of bandwidth for the virtual machines connected to it.

Figure 3.5 displays the communication path for virtual machines connected to a vSwitch bound to a single network adapter. In the diagram, when VM01 on silo104 needs to communicate with VM02 on silo105, the traffic from the virtual machine passes through the ProductionLAN port group on the virtual switch in the VMkernel on silo104 and out the physical network adapter to which the virtual switch is bound. From the physical network adapter, the traffic reaches the first physical switch (PhySw1), which passes it to the second physical switch (PhySw2). PhySw2 delivers the traffic through the physical network adapter associated with the ProductionLAN virtual switch on silo105. In the last stage of the communication, that virtual switch passes the traffic to the destination virtual machine, VM02.
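Binding an uplink to a vSwitch can also be performed from the Service Console. The sketch below assumes a physical adapter named vmnic1 and a vSwitch named vSwitch1; substitute the names reported by your own host:

# List the physical adapters the host has detected
esxcfg-nics -l

# Link physical adapter vmnic1 to vSwitch1 as its uplink
esxcfg-vswitch -L vmnic1 vSwitch1

# Add the port group that virtual machines will connect to
esxcfg-vswitch -A ProductionLAN vSwitch1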

Figure 3.5 The vSwitch with a single network adapter allows virtual machines to communicate with physical servers and other virtual machines on the network.

Network Adapters and Network Discovery

Discovering the network adapters, and even the networks they are connected to, is easy with the VI Client. While connected to a VirtualCenter server or an individual ESX Server host, the Network Adapters node on the Configuration tab of a host displays all the available adapters. The following image shows how each adapter is listed with the adapter model, the vmnic# label, the speed and duplex setting, its association with a vSwitch, and the range of IP addresses discovered on its network:

The network IP addresses listed under the Networks column are the result of discovery across that network. The range may change as addresses are added and removed, and NICs without a range simply have not discovered any IP addresses yet. Treat these ranges as informational only and verify them independently, since they can be inaccurate.
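The same adapter inventory is available from the Service Console with esxcfg-nics. As a sketch (vmnic1 is an example adapter name):

# List every physical NIC with its driver, link state, speed, duplex, and MAC address
esxcfg-nics -l

# Force a specific speed and duplex on an adapter
esxcfg-nics -s 1000 -d full vmnic1

# Return the adapter to autonegotiation
esxcfg-nics -a vmnic1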

The last type of virtual switch is referred to as a NIC team. As shown in Figure 3.6 and Figure 3.7, a NIC team involves the association of multiple physical network adapters with a single vSwitch. A vSwitch configured as a NIC team can consist of a maximum of 32 uplinks. In other words, a single vSwitch can use up to 32 physical network adapters to send and receive traffic from the physical switches. NIC teams offer the advantage of redundancy and load distribution. Later in this chapter, we will dig deeper into the configuration and workings of the NIC team.
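From the Service Console, building a NIC team is just a matter of linking additional uplinks to the same vSwitch. A minimal sketch, assuming adapters vmnic1 and vmnic2 and an existing vSwitch1:

# Link two physical adapters to the same vSwitch to form a NIC team
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1

# Confirm that both uplinks are now listed for vSwitch1
esxcfg-vswitch -l

# Unlink an adapter if it must be removed from the team
esxcfg-vswitch -U vmnic2 vSwitch1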

Uplink Limits

Although a single vSwitch can be associated with multiple physical adapters, as in a NIC team, a single physical adapter cannot be associated with multiple vSwitches. Depending on the expansion capability of the host, ESX Server 3.0.1 hosts can have up to 26 e100 network adapters, 32 e1000 network adapters, or 20 Broadcom network adapters. The maximum number of Ethernet ports on an ESX Server host is 32, regardless of the expansion slots or the number of adapters used to reach that maximum. For example, eight quad-port NICs would reach the 32-port maximum. All 32 ports could be dedicated to a single vSwitch or spread across multiple vSwitches.

Figure 3.6 A vSwitch with a NIC team has multiple available adapters for data transfer. A NIC team offers redundancy and load distribution.