
    Memory Performance


    The hypervisor virtualizes the guest physical memory to isolate VMs from each other and provide a contiguous, zero-based memory space for each guest operating system. Memory virtualization can increase the CPU cost of accessing memory, especially when applications frequently modify the virtual address space in the guest operating system because of frequent allocations and deallocations.

    Enlightened Guests


    Windows Server 2008 includes kernel enlightenments and optimizations to the memory manager to reduce the CPU overhead from Hyper‑V memory virtualization. Workloads that have a large working set in memory can benefit from using Windows Server 2008 as a guest. These enlightenments reduce the CPU cost of context switching between processes and accessing memory. Additionally, they improve the multiprocessor (MP) scalability of Windows Server 2008 guests.

    Correct Memory Sizing


    You should size VM memory as you typically do for server applications on a physical machine. You must size it to reasonably handle the expected load at ordinary and peak times because insufficient memory can significantly increase response times and CPU or I/O usage. In addition, the root partition must have sufficient memory (leave at least 512 MB available) to provide services such as I/O virtualization, snapshot, and management to support the child partitions.

    A good rule of thumb for the memory overhead of each VM is 32 MB for the first 1 GB of virtual RAM plus another 8 MB for each additional GB of virtual RAM. This overhead should be factored into calculations of how many VMs to host on a physical server. The actual memory overhead varies depending on the load and the amount of memory that is assigned to each VM.
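
    As a worked example of this arithmetic, the following Python sketch estimates the per-VM overhead and a rough VM count for a host. The helper names are hypothetical, and the calculation deliberately ignores the root partition operating system’s own working set beyond the 512 MB reserve mentioned above.

        def vm_overhead_mb(vm_ram_gb):
            """Overhead rule of thumb: 32 MB for the first GB of virtual RAM,
            plus 8 MB for each additional GB."""
            return 32 + 8 * max(vm_ram_gb - 1, 0)

        def max_vms(host_ram_gb, vm_ram_gb, root_reserve_mb=512):
            """Rough count of equally sized VMs that fit in host memory while
            leaving the recommended reserve available for the root partition."""
            available_mb = host_ram_gb * 1024 - root_reserve_mb
            per_vm_mb = vm_ram_gb * 1024 + vm_overhead_mb(vm_ram_gb)
            return available_mb // per_vm_mb

        # Example: a 4-GB guest carries about 56 MB of overhead, and roughly
        # seven such guests fit on a 32-GB host by this estimate.
        print(vm_overhead_mb(4))   # 56
        print(max_vms(32, 4))      # 7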


    Storage I/O Performance


    Hyper‑V supports synthetic and emulated storage devices in VMs, but the synthetic devices generally offer significantly better throughput and response times and reduced CPU overhead. The exception is the emulated IDE device that boots the operating system, for which the integration services load a filter driver that reroutes I/Os to the synthetic storage path. Virtual hard disks (VHDs) can be backed by three types of VHD files or by raw disks. This section describes the different options and considerations for tuning storage I/O performance.

    For more information, refer to “Performance Tuning for the Storage Subsystem” earlier in this guide, which discusses considerations for selecting and configuring storage hardware.


    Synthetic SCSI Controller


    The synthetic SCSI controller provides significantly better storage I/O performance and lower CPU overhead than the emulated IDE device. The VM integration services include the enlightened driver for this storage device and are required for the guest operating system to detect it. The operating system disk must be mounted on the IDE device for the operating system to boot correctly, but the VM integration services load a filter driver that reroutes IDE device I/Os to the synthetic storage device.

    We strongly recommend that you mount the data drives directly to the synthetic SCSI controller because that configuration has reduced CPU overhead. You should also mount log files and the operating system paging file directly to the synthetic SCSI controller if their expected I/O rate is high.

    For highly intensive storage I/O workloads that span multiple data drives, each VHD should be attached to a separate synthetic SCSI controller for better overall performance. In addition, each VHD should be stored on separate physical disks.

    Virtual Hard Disk Types


    There are three types of VHD files. We recommend that production servers use fixed-size VHD files for better performance and to avoid the risk of the virtualization server running out of disk space when a VHD file must expand at run time. The following are the performance characteristics and trade-offs of the three VHD types:

    • Dynamically expanding VHD.

    Space for the VHD is allocated on demand. The blocks in the disk start as zeroed blocks but are not backed by any actual space in the file. Reads from such blocks return a block of zeros. When a block is first written to, the virtualization stack must allocate space within the VHD file for the block and then update the metadata. This increases the number of disk I/Os required for the write and increases CPU usage. Reads and writes to existing blocks incur both disk access and CPU overhead when looking up the blocks’ mapping in the metadata.

    • Fixed-size VHD.

    Space for the VHD is allocated in full when the VHD file is created. This type of VHD is less apt to fragment; fragmentation reduces I/O throughput because a single I/O must be split into multiple I/Os. A fixed-size VHD has the lowest CPU overhead of the three VHD types because reads and writes do not need to look up the block mapping.

    • Differencing VHD.

    The VHD points to a parent VHD file. Any writes to blocks never written to before result in space being allocated in the VHD file, as with a dynamically expanding VHD. Reads are serviced from the VHD file if the block has been written to. Otherwise, they are serviced from the parent VHD file. In both cases, the metadata is read to determine the mapping of the block. Reads and writes to this VHD can consume more CPU and result in more I/Os than a fixed-sized VHD.
    Snapshots of a VM create a differencing VHD to store the writes to the disks since the snapshot was taken. Having only a few snapshots can elevate the CPU usage of storage I/Os, but might not noticeably affect performance except in highly I/O-intensive server workloads.

    However, having a large chain of snapshots can noticeably affect performance because reading from the VHD can require checking for the requested blocks in many differencing VHDs. Keeping snapshot chains short is important for maintaining good disk I/O performance.
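
    To illustrate why long snapshot chains hurt read performance, the following simplified Python model (an assumption-laden sketch, not the actual VHD on-disk format) walks a chain of differencing disks until it finds the one that owns the requested block; each hop adds a metadata lookup and potentially another disk access.

        class Vhd:
            """Toy model of a differencing VHD chain. Real VHDs use a block
            allocation table on disk; this sketch only shows how a read walks
            parent links until it finds an allocated block."""

            def __init__(self, parent=None):
                self.parent = parent   # parent VHD (None for the base disk)
                self.blocks = {}       # block number -> data written to this VHD

            def write(self, block, data):
                # Writes always land in the topmost (child) VHD.
                self.blocks[block] = data

            def read(self, block):
                vhd, lookups = self, 0
                while vhd is not None:
                    lookups += 1       # one metadata lookup (and possibly an I/O) per hop
                    if block in vhd.blocks:
                        return vhd.blocks[block], lookups
                    vhd = vhd.parent
                return b"\x00" * 512, lookups   # unallocated blocks read as zeros

        # A chain of 10 snapshots: a block written only to the base disk costs
        # 11 lookups to read through the newest differencing VHD.
        base = Vhd()
        base.write(7, b"data")
        top = base
        for _ in range(10):
            top = Vhd(parent=top)
        print(top.read(7))   # (b'data', 11)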


    Passthrough Disks


    A virtual hard disk in a VM can be mapped directly to a physical disk or logical unit number (LUN) instead of to a VHD file. The benefit is that this configuration bypasses the file system (NTFS) in the root partition, which reduces the CPU usage of storage I/O. The risk is that physical disks or LUNs can be more difficult to move between machines than VHD files.

    Large data drives can be prime candidates for passthrough disks, especially if they are I/O intensive. VMs that can be migrated between virtualization servers (such as through quick migration) must also use drives that reside on a LUN of a shared storage device.


    Disabling File Last Access Time Check


    Windows Server 2003 and earlier Windows operating systems update the last-accessed time of a file when applications open, read, or write to the file. This increases the number of disk I/Os, which further increases the CPU overhead of virtualization. If applications do not use the last-accessed time on a server, system administrators should consider setting the following registry value to 1 to disable these updates:

    NtfsDisableLastAccessUpdate (REG_DWORD)

    HKLM\System\CurrentControlSet\Control\FileSystem\

    By default, both Windows Vista and Windows Server 2008 disable the last-access time updates.
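
    As one way to apply this setting from a script, the following Python sketch uses the standard winreg module; running fsutil behavior set disablelastaccess 1 from an elevated command prompt has the same effect. Both approaches require administrative rights, and a restart is typically needed before the change takes effect.

        # Minimal sketch: disable NTFS last-access time updates by setting
        # NtfsDisableLastAccessUpdate to 1. Run with administrative rights;
        # a restart is typically required for the change to take effect.
        import winreg

        KEY_PATH = r"SYSTEM\CurrentControlSet\Control\FileSystem"

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "NtfsDisableLastAccessUpdate", 0,
                              winreg.REG_DWORD, 1)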


    Physical Disk Topology


    VHDs used by I/O-intensive VMs generally should not be placed on the same physical disks, because the disks can otherwise become a bottleneck. If possible, they should also not be placed on the same physical disks that the root partition uses. For a discussion of capacity planning for storage hardware and RAID selection, see “Performance Tuning for the Storage Subsystem” earlier in this guide.

    I/O Balancer Controls


    The virtualization stack balances storage I/O streams from different VMs so that each VM has similar I/O response times when the system’s I/O bandwidth is saturated. The virtualization stack tries to fully use the I/O device’s throughput while providing reasonable balance, but the following registry keys can be used to adjust the balancing algorithm. The first path applies to storage scenarios, and the second path applies to networking scenarios:

    HKLM\System\CurrentControlSet\Services\StorVsp\ (REG_DWORD values)

    HKLM\System\CurrentControlSet\Services\VmSwitch\ (REG_DWORD values)

    Both storage and networking have three registry keys at the preceding StorVsp and VmSwitch paths, respectively. Each value is a REG_DWORD and operates as follows. We do not recommend this advanced tuning unless you have a specific reason to use it, and note that these registry keys might be removed in future releases. A script sketch for setting these values follows the list:



    • IOBalance_Enabled

    The balancer is enabled when set to a nonzero value and disabled when set to 0. The default is enabled for storage and disabled for networking. Enabling the balancing for networking can add significant CPU overhead in some scenarios.

    • IOBalance_KeepHwBusyLatencyTarget_Microseconds

    This controls how much work, represented by a latency value, the balancer allows to be issued to the hardware before throttling to provide better balance. The default is 83 ms for storage and 2 ms for networking. Lowering this value can improve balance but will reduce some throughput. Lowering it too much significantly affects overall throughput. Storage systems with high throughput and high latencies can show added overall throughput with a higher value for this parameter.

    • IOBalance_AllowedPercentOverheadDueToFlowSwitching

    This controls how much work the balancer issues from a VM before switching to another VM. This setting is primarily for storage where finely interleaving I/Os from different VMs can increase the number of disk seeks. The default is 8 percent for both storage and networking.
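
    As referenced above, the following Python sketch shows one way to create these values from a script, using the storage (StorVsp) path; the VmSwitch path works the same way. The numbers used are illustrative examples only, not recommended settings, and this tuning should be left at the defaults unless you have a specific reason to change it.

        # Sketch only: create the I/O balancer REG_DWORD values under the
        # StorVsp key. The values below are arbitrary examples, not
        # recommendations, and these keys might be removed in future releases.
        # Run with administrative rights.
        import winreg

        STORVSP_PATH = r"SYSTEM\CurrentControlSet\Services\StorVsp"

        settings = {
            "IOBalance_Enabled": 1,
            "IOBalance_KeepHwBusyLatencyTarget_Microseconds": 83000,  # 83 ms
            "IOBalance_AllowedPercentOverheadDueToFlowSwitching": 8,
        }

        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, STORVSP_PATH, 0,
                                winreg.KEY_SET_VALUE) as key:
            for name, value in settings.items():
                winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)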

    Network I/O Performance


    Hyper‑V supports synthetic and emulated network adapters in the VMs, but the synthetic devices offer significantly better performance and reduced CPU overhead. Each of these adapters is connected to a virtual network switch, which can be connected to a physical network adapter if external network connectivity is needed.

    For how to tune the network adapter in the root partition, including interrupt moderation, refer to “Performance Tuning for the Networking Subsystem” earlier in this guide. The TCP tunings in that section should be applied, if required, to the child partitions.


    Synthetic Network Adapter


    Hyper‑V features a synthetic network adapter that is designed specifically for VMs to achieve significantly reduced CPU overhead on network I/O compared to the emulated network adapter, which mimics existing hardware. The synthetic network adapter communicates between the child and root partitions over VMBus by using shared memory for more efficient data transfer.

    The emulated network adapter should be removed through the VM settings dialog box and replaced with a synthetic network adapter. The synthetic network adapter requires that the VM integration services be installed in the guest.


    Multiple Synthetic Network Adapters on Multiprocessor VMs


    Virtual machines with more than one virtual processor might benefit from having more than one synthetic network adapter installed in the VM. Workloads that are network intensive, such as a Web server, can make use of greater parallelism in the virtual network stack if a second synthetic network adapter is installed in the VM.

    Offload Hardware


    As with the native scenario, offload capabilities in the physical network adapter reduce the CPU usage of network I/Os in VM scenarios. Hyper‑V currently uses LSOv1 and TCPv4 checksum offload. The offload capabilities must be enabled in the driver for the physical network adapter in the root partition. For details on offload capabilities in network adapters, refer to “Choosing a Network Adapter” earlier in this guide.

    Drivers for certain network adapters disable LSOv1 but enable LSOv2 by default. System administrators must explicitly enable LSOv1 by using the driver Properties dialog box in Device Manager.


    Network Switch Topology


    Hyper‑V supports creating multiple virtual network switches, each of which can be attached to a physical network adapter if needed. Each network adapter in a VM can be connected to a virtual network switch. If the physical server has multiple network adapters, VMs with network-intensive loads can benefit from being connected to different virtual switches to better use the physical network adapters.

    Interrupt Affinity


    Under certain workloads, binding the device interrupts for a single network adapter to a single logical processor can improve performance for Hyper‑V. We recommend this advanced tuning only to address specific problems in fully using network bandwidth. System administrators can use the IntPolicy tool to bind device interrupts to specific processors.

    VLAN Performance


    The Hyper‑V synthetic network adapter supports VLAN tagging. It provides significantly better network performance if the physical network adapter supports NDIS_ENCAPSULATION_IEEE_802_3_P_AND_Q_IN_OOB encapsulation for both large send and checksum offload. Without this support, Hyper‑V cannot use hardware offload for packets that require VLAN tagging, and network performance can decrease.

