    Choosing a Network Adapter


    Network-intensive applications need high-performance network adapters. This section covers some considerations for choosing network adapters.

    Offload Capabilities


    Offloading tasks can reduce CPU usage on the server, which improves overall system performance. The Microsoft network stack can offload one or more tasks to a network adapter if you choose one that has the appropriate offload capabilities. Table 4 provides more details about each offload capability.

    Table 4. Offload Capabilities for Network Adapters



    Checksum calculation. The network stack can offload the calculation and validation of both Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) checksums on sends and receives. It can also offload the calculation and validation of both IPv4 and IPv6 checksums on sends and receives.

    IP security authentication and encryption. The TCP/IP transport can offload the calculation and validation of encrypted checksums for authentication headers and Encapsulating Security Payloads (ESPs). The TCP/IP transport can also offload the encryption and decryption of ESPs.

    Segmentation of large TCP packets. The TCP/IP transport supports Large Send Offload v2 (LSOv2). With LSOv2, the TCP/IP transport can offload the segmentation of large TCP packets to the hardware.

    TCP stack. The TCP offload engine (TOE) enables a network adapter that has the appropriate capabilities to offload the entire network stack.
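
    For a quick check of the stack-level settings that correspond to several of these capabilities (TCP chimney offload, RSS, and receive window auto-tuning), the global TCP parameters can be displayed with the command below; per-adapter offloads such as checksum and LSOv2 are typically configured on the adapter's Advanced properties tab in Device Manager. This is a minimal sketch using the Windows Server 2008 netsh syntax:

    netsh int tcp show global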

    Receive-Side Scaling (RSS)


    On systems with Pentium 4 and later processors, all processing for network I/O within the context of an interrupt service routine (ISR) is routed to the same processor. This behavior differs from that of earlier processors, in which interrupts from a device were rotated across all processors. The result is a scalability limitation for multiprocessor servers that host a single network adapter: overall throughput is governed by the processing power of a single CPU.

    With RSS, the network driver, together with the network adapter, distributes incoming packets among processors so that packets that belong to the same TCP connection stay on the same processor, which preserves ordering. This improves scalability for scenarios such as Web servers, in which a machine accepts many connections that originate from different source addresses and ports. Research shows that distributing packets that belong to TCP connections across hyperthreaded logical processors degrades performance; therefore, only physical processors accept RSS traffic. For more information about RSS, see “Scalable Networking: Eliminating the Receive Processing Bottleneck—Introducing RSS” in "Resources".
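
    If RSS has been disabled in the TCP/IP stack, it can be re-enabled globally with the following command (the network adapter must also support RSS and have it enabled in its driver settings); this assumes the Windows Server 2008 netsh syntax:

    netsh int tcp set global rss=enabled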

    Message-Signaled Interrupts (MSI/MSI-X)


    Network adapters that support MSI/MSI-X can target their interrupts to specific processors. If the adapters also support RSS, then a processor can be dedicated to servicing interrupts and DPCs for a given TCP connection. This preserves the cache locality of TCP structures and greatly improves performance.

    Network Adapter Resources


    A few network adapters actively manage their resources to achieve optimum performance. Several network adapters let the administrator manually configure resources by using the Advanced Networking tab for the adapter. For such adapters, you can set the values of parameters such as the number of receive buffers and send buffers.

    Interrupt Moderation


    To control interrupt moderation, some network adapters expose different interrupt moderation levels, buffer coalescing parameters (sometimes separately for send and receive buffers), or both. You should consider buffer coalescing or batching when the network adapter does not perform interrupt moderation.

    Suggested Network Adapter Features for Server Roles


    Table 5 lists high-performance network adapter features that can improve throughput, latency, or scalability for some server roles.

    Table 5. Benefits from Network Adapter Features for Different Server Roles



    File server: checksum offload, segmentation offload, TCP offload engine (TOE), receive-side scaling (RSS)
    Web server: checksum offload, segmentation offload, TOE, RSS
    Mail server (short-lived connections): checksum offload, RSS
    Database server: checksum offload, segmentation offload, TOE, RSS
    FTP server: checksum offload, segmentation offload, TOE
    Media server: checksum offload, TOE, RSS


    Disclaimer: The recommendations in Table 5 are intended to serve as guidance only for choosing the most suitable technology for specific server roles under a deterministic traffic pattern. User experience can be different, depending on workload characteristics and the hardware that is used.

    If your hardware supports TOE, then you must enable that option in the operating system to benefit from the hardware’s capability. You can enable TOE by running the following:



    netsh int tcp set global chimney=enabled
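
    To verify that established connections are actually being offloaded to the adapter, you can list connections together with their offload state (reported as InHost or Offloaded). This assumes the netstat switches available on Windows Server 2008:

    netstat -t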

    Tuning the Network Adapter


    You can optimize network throughput and resource usage by tuning the network adapter, if any tuning options are exposed by the adapter. Remember that the correct tuning settings depend on the network adapter, the workload, the host computer resources, and your performance goals.

    Enabling Offload Features


    Turning on network adapter offload features is usually beneficial. Sometimes, however, the network adapter is not powerful enough to handle the offload capabilities at high throughput. For example, enabling segmentation offload can reduce the maximum sustainable throughput on some network adapters because of limited hardware resources. However, if the reduced throughput is not expected to be a limitation, you should enable offload capabilities even for such network adapters. Note that some network adapters require offload features to be independently enabled for send and receive paths.
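
    Per-adapter offload features are usually enabled or disabled on the adapter's Advanced properties tab. If you need to rule offloads in or out at the stack level while troubleshooting throughput problems, one option (a sketch assuming the Windows Server 2008 netsh syntax) is to toggle IP task offloading globally and re-enable it afterward:

    netsh int ip set global taskoffload=disabled
    netsh int ip set global taskoffload=enabled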

    Increasing Network Adapter Resources


    For network adapters that allow manual configuration of resources such as receive and send buffers, you should increase the allocated resources. Some network adapters set their receive buffers low to conserve memory allocated from the host, and the low value results in dropped packets and decreased performance. For receive-intensive scenarios, we therefore recommend that you increase the receive buffer value to the maximum. If the adapter does not expose manual resource configuration, it either configures the resources dynamically or the resources are set to a fixed value that cannot be changed.

    Enabling Interrupt Moderation


    To control interrupt moderation, some network adapters expose different interrupt moderation levels, buffer coalescing parameters (sometimes separately for send and receive buffers), or both. Consider interrupt moderation for CPU-bound workloads, and weigh the trade-off: interrupt moderation saves host CPU cycles but increases latency, whereas disabling it increases host CPU usage because of the larger number of interrupts but reduces latency. If the network adapter does not perform interrupt moderation but does expose buffer coalescing, then increasing the number of coalesced buffers allows more buffers per send or receive, which improves performance.

    Binding Each Adapter to a CPU


    Which method you use to bind each adapter to a CPU depends on the number of network adapters, the number of CPUs, and the number of ports per network adapter. Important factors are the type of workload and the distribution of the interrupts across the CPUs. For a workload such as a Web server that uses several network adapters, partition the adapters on a per-processor basis to isolate the interrupts that each adapter generates.

    TCP Receive Window Auto-Tuning


    One of the most significant changes to the TCP stack in this release is TCP receive window auto-tuning, which can affect existing network infrastructure demands. Previously, the network stack used a fixed-size receive-side window that limited the overall potential throughput of a connection. When that fixed-size default is used, you can calculate the total achievable throughput of a single connection as:

    Total achievable throughput in bytes per second = TCP receive window in bytes * (1 / connection latency in seconds)


    For example, the total achievable throughput is only about 51 Mbps on a 1-Gbps connection with 10 ms of latency (a reasonable value for a large corporate network infrastructure); the implied fixed window here is roughly 64 KB, because 64,000 bytes / 0.010 s is about 6.4 MB per second, or approximately 51 Mbps. With auto-tuning, however, the receive-side window is adjustable and can grow to meet the demands of the sender, and it is entirely possible for a connection to achieve the full line rate of a 1-Gbps connection. Network usage scenarios that might have been limited in the past by the total achievable throughput of TCP connections can now fully use the network.
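
    Receive window auto-tuning is enabled by default in this release. If it has been turned off, or if you want to confirm its current level, the setting can be inspected and restored with commands such as the following (a sketch using the Windows Server 2008 netsh syntax; normal is the default level):

    netsh int tcp show global
    netsh int tcp set global autotuninglevel=normal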

    Remote file copy is a common network usage scenario that is likely to increase demand on the infrastructure because of this change. Many improvements have been made to the underlying operating system support for remote file copy that now let large file copies perform at disk I/O speeds. If many concurrent remote large file copies are typical within your network environment, your network infrastructure might be taxed by the significant increase in network usage by each file copy operation.



    Windows Filtering Platform

    The Windows Filtering Platform (WFP), which was introduced in Windows Vista and Windows Server 2008, provides APIs that third-party independent software vendors (ISVs) can use to create packet-processing filters. Examples include firewall and antivirus software. Note that a poorly written WFP filter can significantly decrease a server's networking performance. For more information about WFP, see "Resources".



