• Synthetic SCSI Controller
  • Virtual Hard Disk Types
  • Disabling File Last Access Time Check
  • Network I/O Performance
  • Synthetic Network Adapter
  • Install Multiple Synthetic Network Adapters on Multiprocessor VMs
  • Network Switch Topology
  • Interrupt Affinity
  • VLAN Performance





    Storage I/O Performance


    Hyper-V supports synthetic and emulated storage devices in VMs, but the synthetic devices generally offer significantly better throughput and response times with reduced CPU overhead. The exception is the emulated IDE device when a filter driver is loaded that reroutes its I/Os to the synthetic storage path (see “Synthetic SCSI Controller” below). Virtual hard disks (VHDs) can be backed by one of three types of VHD file or by raw physical disks. This section describes the different options and considerations for tuning storage I/O performance.

    For more information, refer to “Performance Tuning for the Storage Subsystem” earlier in this guide, which discusses considerations for selecting and configuring storage hardware.


    Synthetic SCSI Controller


    The synthetic storage controller provides significantly better performance on storage I/Os with less CPU overhead than the emulated IDE device. The VM Integration Services include the enlightened driver for this storage device and are required for the guest operating system to detect it. The operating system disk must be mounted on the IDE device for the operating system to boot correctly, but the VM integration services load a filter driver that reroutes IDE device I/Os to the synthetic storage device.

    We strongly recommend that you mount the data drives directly to the synthetic SCSI controller because that configuration has reduced CPU overhead. You should also mount log files and the operating system paging file directly to the synthetic SCSI controller if their expected I/O rate is high.

    For highly intensive storage I/O workloads that span multiple data drives, each VHD should be attached to a separate synthetic SCSI controller for better overall performance. In addition, each VHD should be stored on separate physical disks.

    Virtual Hard Disk Types


    There are three types of VHD files. We recommend that production servers use fixed-size VHD files: they perform better, and allocating all of the space up front avoids the risk that the virtualization server runs out of disk space when a VHD must expand at run time. The following are the performance characteristics and trade-offs of the three VHD types (a simplified model of the allocation and mapping cost appears later in this section):

    • Dynamically expanding VHD.

    Space for the VHD is allocated on demand. The blocks in the disk start as zeroed blocks but are not backed by any actual space in the file. Reads from such blocks return a block of zeros. When a block is first written to, the virtualization stack must allocate space within the VHD file for the block and then update the metadata. This increases the number of necessary disk I/Os for the write and increases CPU usage. Reads and writes to existing blocks incur both disk access and CPU overhead when looking up the blocks’ mapping in the metadata.

    • Fixed-size VHD.

    Space for the VHD is allocated in full when the VHD file is created. This type of VHD is less likely to fragment, which avoids the loss of I/O throughput that occurs when a single I/O must be split into multiple I/Os. It has the lowest CPU overhead of the three VHD types because reads and writes do not need to look up the block mapping.

    • Differencing VHD.

    The VHD points to a parent VHD file. Any writes to blocks never written to before result in space being allocated in the VHD file, as with a dynamically expanding VHD. Reads are serviced from the VHD file if the block has been written to. Otherwise, they are serviced from the parent VHD file. In both cases, the metadata is read to determine the mapping of the block. Reads and writes to this VHD can consume more CPU and result in more I/Os than a fixed-sized VHD.
    Snapshots of a VM create a differencing VHD to store the writes to the disks since the snapshot was taken. Having only a few snapshots can elevate the CPU usage of storage I/Os, but might not noticeably affect performance except in highly I/O-intensive server workloads.

    However, having a large chain of snapshots can noticeably affect performance because reading from the VHD can require checking for the requested blocks in many differencing VHDs. Keeping snapshot chains short is important for maintaining good disk I/O performance.
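
    The metadata cost described above can be illustrated with a toy model. The following Python sketch is not the on-disk VHD format; it is a simplified model, with an assumed 2 MB block size and invented names, of a dynamically expanding disk that allocates blocks on first write and keeps a block-mapping table. It shows why first writes cost extra I/O and why every access pays for a mapping lookup that a fixed-size VHD avoids.

        BLOCK_SIZE = 2 * 1024 * 1024  # illustrative block size; real VHD files fix this per file


        class DynamicDiskModel:
            """Toy model of a dynamically expanding virtual disk (not the real VHD format)."""

            def __init__(self):
                self.block_map = {}    # virtual block number -> offset in the backing file
                self.file_size = 0     # bytes allocated in the backing file so far
                self.metadata_writes = 0

            def write(self, virtual_offset, data):
                block = virtual_offset // BLOCK_SIZE
                if block not in self.block_map:
                    # First write to this block: allocate space and update the mapping
                    # table. This is the extra disk I/O and CPU cost described above.
                    self.block_map[block] = self.file_size
                    self.file_size += BLOCK_SIZE
                    self.metadata_writes += 1
                # A real implementation would now write `data` at
                # self.block_map[block] + (virtual_offset % BLOCK_SIZE).

            def read(self, virtual_offset, length):
                block = virtual_offset // BLOCK_SIZE
                if block not in self.block_map:
                    return b"\x00" * length   # unallocated blocks read as zeros
                # Allocated blocks still pay for the mapping lookup on every access.
                return b"\x00" * length       # placeholder for data read at the mapped offset

    A fixed-size VHD, by contrast, behaves as if the mapping were the identity: every virtual offset translates directly to a file offset, so no allocation or table lookup is ever needed.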


    Passthrough Disks


    A disk in a VM can be mapped directly to a physical disk or logical unit number (LUN) instead of to a VHD file. The benefit is that this configuration bypasses the file system (NTFS) in the root partition, which reduces the CPU usage of storage I/O. The risk is that physical disks or LUNs can be more difficult to move between machines than VHD files.

    Large data drives can be prime candidates for passthrough disks, especially if they are I/O intensive. VMs that can be migrated between virtualization servers (for example, by using quick migration) must also use drives that reside on a LUN of a shared storage device.


    Disabling File Last Access Time Check


    Windows Server 2003 and earlier Windows operating systems update the last-accessed time of a file when applications open, read, or write to the file. This increases the number of disk I/Os, which further increases the CPU overhead of virtualization. If applications do not use the last-accessed time on a server, system administrators should consider setting the following registry value to 1 to disable these updates.

    HKLM\System\CurrentControlSet\Control\FileSystem\NtfsDisableLastAccessUpdate (REG_DWORD)

    By default, Windows Server 2008 R2 disables the last-access time updates.
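
    If you prefer to script this change rather than edit the registry by hand, the following is a minimal sketch that uses Python's standard winreg module; run it with administrative rights in the guest operating system whose last-access updates you want to disable. The path and value name are the ones listed above, and a restart may be required before the change takes effect.

        import winreg

        FILESYSTEM_KEY = r"System\CurrentControlSet\Control\FileSystem"

        # Set NtfsDisableLastAccessUpdate = 1 so that NTFS stops updating
        # last-access timestamps on file open, read, and write operations.
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, FILESYSTEM_KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "NtfsDisableLastAccessUpdate", 0,
                              winreg.REG_DWORD, 1)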


    Physical Disk Topology


    VHDs that I/O-intensive VMs use generally should not be placed on the same physical disks because this can cause the disks to become a bottleneck. If possible, they should also not be placed on the same physical disks that the root partition uses. For a discussion on capacity planning for storage hardware and RAID selection, see “Performance Tuning for the Storage Subsystem” earlier in this guide.

    I/O Balancer Controls


    The virtualization stack balances storage I/O streams from different VMs so that each VM has similar I/O response times when the system’s I/O bandwidth is saturated. The virtualization stack already tries to fully use the I/O device’s throughput while providing reasonable balance, but the following registry keys can be used to adjust the balancing algorithm. The first path applies to storage scenarios, and the second path applies to networking scenarios:

    HKLM\System\CurrentControlSet\Services\StorVsp\ (REG_DWORD values)

    HKLM\System\CurrentControlSet\Services\VmSwitch\ (REG_DWORD values)
    Both storage and networking have three registry values at the preceding StorVsp and VmSwitch paths, respectively. Each value is a DWORD and operates as follows (a read-only inspection sketch appears after the list). We do not recommend this advanced tuning option unless you have a specific reason to use it, and note that these registry values might be removed in future releases:

    • IOBalance_Enabled

    The balancer is enabled when set to a nonzero value and disabled when set to 0. The default is enabled for storage and disabled for networking. Enabling the balancing for networking can add significant CPU overhead in some scenarios.

    • IOBalance_KeepHwBusyLatencyTarget_Microseconds

    This controls how much work, represented by a latency value, the balancer allows to be issued to the hardware before throttling to provide better balance. The default is 83 ms for storage and 2 ms for networking. Lowering this value can improve balance but will reduce some throughput. Lowering it too much significantly affects overall throughput. Storage systems with high throughput and high latencies can show added overall throughput with a higher value for this parameter.

    • IOBalance_AllowedPercentOverheadDueToFlowSwitching

    This controls how much work the balancer issues from a VM before switching to another VM. This setting is primarily for storage where finely interleaving I/Os from different VMs can increase the number of disk seeks. The default is 8 percent for both storage and networking.
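
    The following read-only sketch, using Python's standard winreg module, reports which of the three balancer values are currently set under the StorVsp path; the same pattern applies to the VmSwitch path. It is an illustration only: on many systems none of the values exist, in which case the defaults described above apply.

        import winreg

        STORVSP_KEY = r"System\CurrentControlSet\Services\StorVsp"
        VALUE_NAMES = [
            "IOBalance_Enabled",
            "IOBalance_KeepHwBusyLatencyTarget_Microseconds",
            "IOBalance_AllowedPercentOverheadDueToFlowSwitching",
        ]

        # Inspect (but do not modify) the balancer tuning values in the root partition.
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, STORVSP_KEY) as key:
            for name in VALUE_NAMES:
                try:
                    value, _value_type = winreg.QueryValueEx(key, name)
                    print(f"{name} = {value}")
                except FileNotFoundError:
                    print(f"{name} is not set; the default applies")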

    Network I/O Performance


    Hyper-V supports synthetic and emulated network adapters in the VMs, but the synthetic devices offer significantly better performance and reduced CPU overhead. Each of these adapters is connected to a virtual network switch, which can be connected to a physical network adapter if external network connectivity is needed.

    For details about tuning the network adapter in the root partition, including interrupt moderation, refer to “Performance Tuning for the Networking Subsystem” earlier in this guide. The TCP tunings in that section should be applied, if required, to the child partitions.


    Synthetic Network Adapter


    Hyper-V features a synthetic network adapter that is designed specifically for VMs and achieves significantly lower CPU overhead on network I/O than the emulated network adapter, which mimics existing hardware. The synthetic network adapter communicates between the child and root partitions over VMBus by using shared memory for more efficient data transfer.

    The emulated network adapter should be removed through the VM settings dialog box and replaced with a synthetic network adapter. The guest operating system requires the VM Integration Services to be installed to detect the synthetic network adapter.

    Perfmon counters representing the network statistics for the installed synthetic network adapters are available under the counter set \Hyper-V Virtual Network Adapter(*)\*.
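
    These counters can also be collected programmatically on the virtualization server. The sketch below uses the PDH bindings from the third-party pywin32 package and takes the full counter path as a command-line argument; the example path in the comment is an assumption, so check the exact instance and counter names under \Hyper-V Virtual Network Adapter(*) in Perfmon first.

        import sys
        import time

        import win32pdh  # from the pywin32 package

        # Example (hypothetical) path:
        #   \Hyper-V Virtual Network Adapter(<adapter instance>)\Bytes/sec
        counter_path = sys.argv[1]

        query = win32pdh.OpenQuery()
        counter = win32pdh.AddCounter(query, counter_path)

        # Rate counters need two samples before they return a meaningful value.
        win32pdh.CollectQueryData(query)
        time.sleep(1)
        win32pdh.CollectQueryData(query)

        _counter_type, value = win32pdh.GetFormattedCounterValue(counter, win32pdh.PDH_FMT_DOUBLE)
        print(f"{counter_path}: {value:,.0f}")

        win32pdh.CloseQuery(query)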

    Install Multiple Synthetic Network Adapters on Multiprocessor VMs


    Virtual machines with more than one virtual processor might benefit from having more than one synthetic network adapter installed. Network-intensive workloads, such as a Web server, can make use of greater parallelism in the virtual network stack when a second synthetic network adapter is installed in the VM.

    Offload Hardware


    As with the native scenario, offload capabilities in the physical network adapter reduce the CPU usage of network I/Os in VM scenarios. Hyper-V currently uses LSOv1 and TCPv4 checksum offload. The offload capabilities must be enabled in the driver for the physical network adapter in the root partition. For details on offload capabilities in network adapters, refer to “Choosing a Network Adapter” earlier in this guide.

    Drivers for certain network adapters disable LSOv1 but enable LSOv2 by default. System administrators must explicitly enable LSOv1 by using the driver Properties dialog box in Device Manager.


    Network Switch Topology


    Hyper-V supports creating multiple virtual network switches, each of which can be attached to a physical network adapter if needed. Each network adapter in a VM can be connected to a virtual network switch. If the physical server has multiple network adapters, VMs with network-intensive loads can benefit from being connected to different virtual switches to better use the physical network adapters.

    Perfmon counters representing the network statistics for the installed virtual switches are available under the counter set \Hyper-V Virtual Switch(*)\*.


    Interrupt Affinity


    System administrators can use the IntPolicy tool to bind device interrupts to specific processors.

    VLAN Performance


    The Hyper-V synthetic network adapter supports VLAN tagging. It provides significantly better network performance if the physical network adapter supports NDIS_ENCAPSULATION_IEEE_802_3_P_AND_Q_IN_OOB encapsulation for both large send and checksum offload. Without this support, Hyper-V cannot use hardware offload for packets that require VLAN tagging and network performance can be decreased.

    VMQ


    Windows Server 2008 R2 introduces support for VMQ-enabled network adapters. These adapters can maintain a separate hardware queue for each VM, up to the limit supported by each network adapter.

    Because the number of hardware queues is limited, you can use the Hyper-V WMI API to ensure that the VMs that use the most network bandwidth are assigned the available hardware queues.


    VM Chimney


    Windows Server 2008 R2 introduces support for VM Chimney, which allows TCP processing for a VM’s network traffic to be offloaded to the physical network adapter. Network connections with long lifetimes see the most benefit because establishing an offloaded connection adds CPU cost, which is then amortized over the life of the connection.

    Live Migration


    Live migration allows you to transparently move running virtual machines from one node of a failover cluster to another node in the same cluster without a dropped network connection or perceived downtime. Note that failover clustering requires shared storage for the cluster nodes.

    The process of moving a running virtual machine can be broken down into two major phases. The first phase is copying the memory contents of the VM from the current host to the new host. The second phase is transferring the VM state from the current host to the new host. The duration of each phase is largely determined by the speed at which data can be transferred from the current host to the new host.
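
    As a rough, back-of-the-envelope sketch (the numbers and the 80 percent efficiency factor are illustrative assumptions, and the real memory-copy phase is iterative because pages dirtied during the copy must be re-sent), the dominant term can be estimated from the VM's memory size and the usable bandwidth of the migration network:

        def estimate_memory_copy_seconds(vm_memory_gb, link_gbps, efficiency=0.8):
            """Lower-bound estimate of the first memory-copy pass of a live migration.

            `efficiency` approximates protocol and host overhead; pages dirtied while
            the copy runs are re-sent in later passes, so real migrations take longer.
            """
            memory_bits = vm_memory_gb * 8 * 1024 ** 3
            usable_bits_per_second = link_gbps * efficiency * 10 ** 9
            return memory_bits / usable_bits_per_second


        # An 8-GB VM over a dedicated 1-Gbps migration network needs roughly 85 seconds
        # for the first pass, versus under 10 seconds on a 10-Gbps network.
        print(f"{estimate_memory_copy_seconds(8, 1):.0f} s on 1 GbE")
        print(f"{estimate_memory_copy_seconds(8, 10):.0f} s on 10 GbE")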



    Providing a dedicated network for live migration traffic helps minimize the time required to complete a live migration and ensures consistent migration times.



    Figure 8. Example Hyper-V Live Migration Configuration

    In addition, increasing the number of receive and send buffers on each network adapter involved in the migration can improve migration performance. For more information, see “Performance Tuning for the Networking Subsystem” earlier in this guide.



