Applies To: Windows Server 2008 R2
This guide describes how to configure your network to use the live migration feature of Hyper-V™. It provides a detailed list of the networking configuration requirements for optimal performance and reliability, as well as recommendations for scenarios that do not meet these requirements.
For information about the complete requirements for using Hyper-V with Cluster Shared Volumes, see Hyper-V: Using Live Migration with Cluster Shared Volumes in Windows Server 2008 R2 (http://go.microsoft.com/fwlink/?LinkId=164729).
The following recommendations will help you configure your networking environment for using live migration:
Live migration requires failover clustering and shared storage. It is recommended that you use Cluster Shared Volumes on an iSCSI or a Fibre Channel storage area network (SAN) or on Serial Attached SCSI (SAS) storage to provide shared access for optimal manageability. The following table provides a list of the network access recommendations for Hyper-V.
| Network access type | Purpose of the network access type | Network traffic requirements | Recommended network access |
| --- | --- | --- | --- |
| Storage | Access storage through iSCSI or Fibre Channel (Fibre Channel does not need a network adapter). | High bandwidth and low latency. | Usually dedicated and private access. Refer to your storage vendor for guidelines. |
| Virtual machine access | Workloads running on virtual machines usually require external network connectivity to service client requests. | Varies. | Public access, which could be teamed for link aggregation or to fail over the cluster. |
| Management | Managing the Hyper-V management operating system. This network is used by Hyper-V Manager or System Center Virtual Machine Manager (VMM). | Low bandwidth. | Public access, which could be teamed to fail over the cluster. |
| Cluster and Cluster Shared Volumes | Preferred network used by the cluster for communications to maintain cluster health. Also used by Cluster Shared Volumes to send data between owner and non-owner nodes. If storage access is interrupted, this network is used to access the Cluster Shared Volumes or to maintain and back up the Cluster Shared Volumes. The cluster should have access to more than one network for communication to ensure that the cluster is highly available. | Usually low bandwidth and low latency. Occasionally, high bandwidth. | Private access |
| Live migration | Transfer virtual machine memory and state. | High bandwidth and low latency during migrations. | Private access |
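As a rough planning aid, the sketch below (a hypothetical helper that is not part of the original guidance; the function and adapter names are illustrative) checks a proposed mapping of these network access types to physical adapters. It flags any access type that has no adapter and any adapter that carries more than one access type, because the configuration table later in this guide handles shared adapters by applying bandwidth caps.

```python
# A minimal sketch: review a proposed assignment of Hyper-V network access
# types to physical adapters. The role names follow the table above; the
# function and adapter names are illustrative assumptions.
from collections import defaultdict

ROLES = [
    "Storage",
    "Virtual machine access",
    "Management",
    "Cluster and Cluster Shared Volumes",
    "Live migration",
]

def review_plan(plan):
    """Return (roles with no adapter, adapters shared by multiple roles)."""
    missing = [role for role in ROLES if role not in plan]
    by_adapter = defaultdict(list)
    for role, adapter in plan.items():
        by_adapter[adapter].append(role)
    shared = {a: roles for a, roles in by_adapter.items() if len(roles) > 1}
    return missing, shared

if __name__ == "__main__":
    # Loosely based on the 3-adapter (1 Gbps) row in the table further below,
    # with storage carried by iSCSI on a dedicated adapter.
    plan = {
        "Virtual machine access": "Network adapter 1",
        "Management": "Network adapter 1",
        "Cluster and Cluster Shared Volumes": "Network adapter 2",
        "Live migration": "Network adapter 3",
        "Storage": "Dedicated iSCSI adapter",
    }
    missing, shared = review_plan(plan)
    for role in missing:
        print(f"No adapter assigned for: {role}")
    for adapter, roles in shared.items():
        print(f"{adapter} carries {', '.join(roles)}; apply bandwidth caps as shown in the table below.")
```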
When your environment does not have the full set of network configuration requirements, the following scenarios identify some additional ways to use live migration.
When planning a network configuration for live migration, review the networking recommendations in the table below. They will help you design the best possible configuration for environments that do not have the required hardware configuration for live migration.
The following table details the recommended, supported, and not recommended network configurations for live migration. It is organized in the order in which each network configuration is commonly used.
| Host configuration | Virtual machine access | Management | Cluster and Cluster Shared Volumes | Live migration | Comments |
| --- | --- | --- | --- | --- | --- |
| 4 network adapters with 1 Gbps | Virtual network adapter 1 | Network adapter 2 | Network adapter 3 | Network adapter 4 | Recommended |
| 3 network adapters with 1 Gbps; 2 adapters are teamed for link aggregation (private) | Virtual network adapter 1 | Virtual network adapter 1 with bandwidth capped at 10% | Network adapter 2 (teamed) | Network adapter 2 with bandwidth capped at 40% (teamed) | Supported |
| 3 network adapters with 1 Gbps | Virtual network adapter 1 | Virtual network adapter 1 with bandwidth capped at 10% | Network adapter 2 | Network adapter 3 | Supported |
| 2 network adapters with 10 Gbps | Virtual network adapter 1 | Virtual network adapter 1 with bandwidth capped at 1% | Network adapter 2 | Network adapter 2 with bandwidth capped at 50% | Supported* |
| 2 network adapters with 10 Gbps; 1 network adapter with 1 Gbps | Virtual network adapter 1 (10 Gbps) | Network adapter 2 (1 Gbps) | Network adapter 3 (10 Gbps) | Network adapter 3 with bandwidth capped at 50% | Supported |
| 2 network adapters with 10 Gbps; 2 network adapters with 1 Gbps | Virtual network adapter 1 (10 Gbps) | Network adapter 2 (1 Gbps) | Network adapter 3 (1 Gbps) | Network adapter 4 (10 Gbps) | Supported |
| 3 network adapters with 1 Gbps; 2 adapters are teamed for link aggregation (public) | Virtual network adapter 1 | Virtual network adapter 1 with bandwidth capped at 5% | Network adapter 2 (teamed) | Network adapter 2 with bandwidth capped at 90% (teamed) | Not recommended |
| 2 network adapters with 1 Gbps | Virtual network adapter 1 | Virtual network adapter 1 with bandwidth capped at 10% | Network adapter 2 | Network adapter 2 with bandwidth capped at 90% | Not recommended |
| 1 network adapter with 10 Gbps; 1 network adapter with 1 Gbps | Virtual network adapter 1 (10 Gbps) | Virtual network adapter 1 with bandwidth capped at 10% | Network adapter 2 (1 Gbps) | Network adapter 2 with bandwidth capped at 90% | Not recommended |
*This configuration is considered recommended if your configuration has a redundant network path available for Cluster and Cluster Shared Volumes communication.
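The bandwidth caps in this table are percentages of the underlying link speed, while the QoS procedure that follows asks for an absolute outbound throttle rate in KBps or MBps. The sketch below (not part of the original guide) converts a percentage cap into that throttle rate; treating 1 Gbps as 1024 Mbps is an assumption made so that the result lines up with the 115 MBps example used in the procedure.

```python
# A minimal sketch: convert a percentage bandwidth cap on a link into the
# absolute outbound throttle rate (in MBps) that the Policy-based QoS wizard
# expects. Assumes 1 Gbps = 1024 Mbps so the result matches the guide's
# 115 MBps example for 90% of 1 Gbps.
def throttle_rate_mbps(link_gbps: float, cap_percent: float) -> float:
    """Return the throttle rate in megabytes per second for a capped link."""
    link_mbps = link_gbps * 1024          # assumed convention: 1 Gbps = 1024 Mbps
    link_mbytes_per_sec = link_mbps / 8   # 8 bits per byte
    return link_mbytes_per_sec * (cap_percent / 100)

if __name__ == "__main__":
    print(round(throttle_rate_mbps(1, 90)))   # ~115 MBps: 90% of 1 Gbps
    print(round(throttle_rate_mbps(2, 40)))   # ~102 MBps: 40% of a 2 x 1 Gbps team
    print(round(throttle_rate_mbps(10, 50)))  # 640 MBps: 50% of 10 Gbps
```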
You can manage the bandwidth that is used for live migration by configuring a QoS policy to limit TCP traffic on port 6600, which is the port used for live migration. This ensures that network traffic on TCP port 6600 does not exceed the limit that you set. Use the following procedure, "To create a QoS policy to limit live migration traffic," to do this.
1. Click Start, type gpedit.msc in the Search programs and files box, and then press ENTER.
2. In the console tree under Local Computer Policy, expand Computer Configuration, expand Windows Settings, right-click Policy-based QoS, and then click Create new policy.
3. In Policy-based QoS, under Policy name, type a name that uniquely identifies the policy.
4. In Specify Outbound Throttle Rate, select the check box to enable throttling for outbound traffic, and then specify a value greater than 1 in kilobytes per second (KBps) or megabytes per second (MBps). For example, enter 115 MBps to limit live migration traffic so that it does not exceed 90% of 1 Gbps. Click Next.
5. In IP Addresses, by default, Any source IP address and Any destination IP address are selected. Click Next.
6. In Application Name, by default, All applications is selected. This applies the throttle rate that you specified to outbound traffic, regardless of the application you are using. Click Next.
7. In Protocols and Ports, under Select the protocol this QoS policy applies to, select TCP from the drop-down menu. This applies the throttle rate that you specified to outbound TCP traffic.
8. Under Select the source port number, select From any source port. This applies the throttle rate to outbound traffic, regardless of the source port number of the traffic.
9. Under Specify the destination port number, select To this destination port number or range, and then specify 6600 as the port number. This applies the throttle rate that you specified only to traffic with the destination port number or range that you specify.
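After you create the policy and Group Policy refreshes (for example, after running gpupdate /force), you might want to confirm that the policy exists. The Python sketch below is a hypothetical check only: it assumes that Policy-based QoS machine policies are written under the HKLM\SOFTWARE\Policies\Microsoft\Windows\QoS registry key, which is an assumption about where Group Policy stores them rather than something this guide documents.

```python
# A minimal sketch (assumption: Policy-based QoS machine policies appear as
# subkeys of HKLM\SOFTWARE\Policies\Microsoft\Windows\QoS; the key path and
# value names are not documented in this guide and may differ).
import winreg

QOS_KEY = r"SOFTWARE\Policies\Microsoft\Windows\QoS"  # assumed location

def list_qos_policies():
    """Print each policy-based QoS policy subkey and its values."""
    try:
        root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, QOS_KEY)
    except FileNotFoundError:
        print("No policy-based QoS key found.")
        return
    with root:
        index = 0
        while True:
            try:
                name = winreg.EnumKey(root, index)
            except OSError:
                break
            print(f"Policy: {name}")
            with winreg.OpenKey(root, name) as policy:
                value_index = 0
                while True:
                    try:
                        value_name, data, _ = winreg.EnumValue(policy, value_index)
                    except OSError:
                        break
                    print(f"  {value_name} = {data}")
                    value_index += 1
            index += 1

if __name__ == "__main__":
    list_qos_policies()
```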
You can configure a QoS policy to limit the network traffic for the IP address used by the management operating system. This ensures that network traffic outbound from the specified IP address does not exceed the limit you set. Use the procedure below to create a QoS policy to do this.
1. Follow steps 1 through 4 in the procedure "To create a QoS policy to limit live migration traffic."
2. In IP Addresses, under This QoS policy applies to, select Only for the following source IP address or prefix, and then type the IP address of the management operating system. This applies the throttle rate that you specified to outbound traffic from the source IP address that you specify.
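The policy needs the source IP address that the management operating system uses. If you are not sure which address that is, the short sketch below (a hypothetical helper, not part of the original procedure) prints the IPv4 addresses registered for the local host name so you can pick the one bound to the management network; on multi-homed hosts, ipconfig gives the complete per-adapter view.

```python
# A minimal sketch: print the IPv4 addresses that resolve for this host's
# name, as candidates for the source IP address in the QoS policy. Addresses
# on adapters that are not registered in DNS may not appear here.
import socket

def local_ipv4_addresses():
    """Return the distinct IPv4 addresses registered for the local host name."""
    infos = socket.getaddrinfo(socket.gethostname(), None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    for address in local_ipv4_addresses():
        print(address)
```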