
    Multi-Path I/O


    Microsoft MPIO is supported with iSCSI storage area networks as well as with Fibre Channel and Serial Attached SCSI (SAS) storage. Microsoft includes an iSCSI Device Specific Module (DSM) with the Microsoft iSCSI Software Initiator that supports many arrays and allows the creation of multiple paths for failover and load balancing. Storage array vendors can also license the Microsoft MPIO DDK and implement their own DSMs specific to their storage, allowing their arrays to interface with the Microsoft MPIO core driver stack. The Microsoft iSCSI initiator can be installed with Microsoft MPIO, the same MPIO that is available for other types of storage. Multi-path I/O provides failover if a path fails and load balancing across multiple active paths to increase throughput.

    It is important to note that when using multi-path I/O for iSCSI storage solutions, both the iSCSI initiator and the iSCSI target need to support MPIO. Each network adapter and its associated ports in the iSCSI initiator and iSCSI target should have identical features to ensure consistent performance. The iSCSI DSM implements several load-balance policies designed for different link performance metrics.
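    To make the idea of load-balance policies concrete, the following Python sketch shows how a failover-only policy and a round-robin policy might choose between two iSCSI sessions. This is an illustration only, not the Microsoft DSM implementation; the path names, fields, and policy logic are simplified assumptions.

        # Illustrative sketch of two common MPIO load-balance policies
        # (failover-only and round robin). This is NOT the Microsoft DSM;
        # the path names and state fields are hypothetical.
        from dataclasses import dataclass
        from itertools import cycle

        @dataclass
        class Path:
            name: str        # e.g. "Session A (NIC1 -> target portal)"
            healthy: bool    # result of path verification
            preferred: bool  # designated active path for failover-only

        def failover_only(paths):
            """Use the preferred path while it is healthy; otherwise fail over."""
            for p in paths:
                if p.preferred and p.healthy:
                    return p
            # Preferred path failed: pick any remaining healthy path.
            return next(p for p in paths if p.healthy)

        def round_robin(paths):
            """Rotate I/O across all healthy paths to add throughput."""
            healthy = cycle([p for p in paths if p.healthy])
            while True:
                yield next(healthy)

        paths = [Path("Session A (NIC1)", True, True),
                 Path("Session B (NIC2)", True, False)]

        print(failover_only(paths).name)          # Session A until it fails
        rr = round_robin(paths)
        print([next(rr).name for _ in range(4)])  # alternates A, B, A, B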


    Management of iSCSI


    The iSCSI solutions discussed in this report are managed using the Microsoft iSCSI initiator. The storage volumes can be managed using standard Windows tools such as “Disk Management”. In addition, most iSCSI target storage solutions provide Microsoft VSS and VDS hardware providers and can be managed with Microsoft Storage Manager for SANs (SMfS), which is available in Windows Server 2003 R2.

    Storage Performance and iSCSI

    General Performance Comments


    One of the concerns about iSCSI is the overall performance of the solution, including the load on the host CPU, the iSCSI target performance, and the Ethernet network performance, especially during periods of heavy I/O. Although this report is not intended to be an exhaustive performance benchmark, some of these performance issues will be discussed.

    iSCSI solutions are a blend of traditional network and traditional storage technologies, and most of the iSCSI storage solutions are pre-configured to provide good overall network and storage performance. Administrators may choose to fine-tune various advanced network and storage settings for additional performance or configuration purposes.

    The implementations discussed for the various iSCSI target solutions were intentionally disparate, to illustrate the variety of ways in which iSCSI targets can be deployed; the configurations discussed for individual products were not necessarily optimized for performance. As a result, the performance of the iSCSI target solutions varied widely because of the variety of designs and components used. These storage solutions used a variety of storage devices, including SATA, parallel SCSI, and SAS disk drives. The disk drives spun at various speeds, including 7,200, 10,000, and 15,000 RPM. Each storage array had a different number of disk drives, and different RAID stripe sizes were used with different arrays. Various disk subsystem caching designs were used, not all of which have been publicly disclosed.

    Basic I/O tests were performed with IOMeter, an open-source I/O load generator. The same group of block sizes and I/O patterns was tested with each iSCSI target solution; however, the queue depth was varied as an additional data point. Some of the iSCSI target solutions supported multi-path I/O, and where possible, multiple paths were used; the deployment scenarios outlined below include up to two sessions. Although the purpose of these tests and this report is not to provide a head-to-head performance comparison of the iSCSI target solutions, performance was measured in order to provide some general reference points for the expected performance range of iSCSI target solutions.
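    As a rough guide to how the queue-depth setting interacts with throughput, the following Python sketch applies Little's law (outstanding I/Os = IOPS x response time) together with the identity throughput = IOPS x block size. The block size, queue depth, and latency values are hypothetical assumptions, not measurements from this report.

        # Back-of-the-envelope relationships exercised by the IOMeter runs.
        # All figures below are illustrative, not results from this report.

        block_size_kb = 64      # I/O size of the access specification
        queue_depth = 2         # outstanding I/Os per worker (varied in the tests)
        avg_latency_ms = 1.5    # assumed average response time per I/O

        # Little's law: outstanding I/Os = IOPS x latency  =>  IOPS = QD / latency
        iops = queue_depth / (avg_latency_ms / 1000.0)
        throughput_mb_s = iops * block_size_kb / 1024.0

        print(f"~{iops:.0f} IOPS, ~{throughput_mb_s:.0f} MB/s at QD={queue_depth}")
        # Raising the queue depth increases throughput only until a link,
        # controller, or disk limit is reached; after that, latency grows instead.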

    Some interesting comparisons between the various types of network adapters in the host servers (iSCSI initiators) were also made. The IOMeter test results for each iSCSI target solution are included in their respective sections. Readers should note that IOMeter testing is by no means a substitute for workload testing and modeling. In addition, tools from Microsoft such as LoadSim for Microsoft Exchange and SQLIO and SQLIOSim for SQL Server can be used to test how an iSCSI initiator and target respond for those particular applications.



    IMPORTANT NOTE: The iSCSI targets presented in this white paper differ in class, price, and disk I/O characteristics, so a head-to-head comparison of the iSCSI targets in the context of this report is not possible. In addition, the tests were run with different parameters to emphasize that this report is not a benchmark report.

    Improving iSCSI Storage Performance


    Performance improvements for iSCSI solutions can be measured either as an increase in absolute network throughput or as a reduction in system resources such as CPU utilization. Benefits vary by application, and application performance improvements may depend on the network packet size and/or the storage block size in use.
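    One way to capture both dimensions at once is an efficiency ratio such as megabytes per second moved per percent of CPU consumed. The Python sketch below shows the arithmetic; the adapter labels and numbers are hypothetical and are not results from this report.

        # Efficiency ratio: payload moved per unit of host CPU (higher is better).
        # The throughput and CPU figures are hypothetical examples.

        def efficiency(throughput_mb_s, cpu_percent):
            """MB/s delivered per percent of CPU consumed."""
            return throughput_mb_s / cpu_percent

        baseline = efficiency(throughput_mb_s=90, cpu_percent=60)   # e.g. low-cost NIC
        offload  = efficiency(throughput_mb_s=110, cpu_percent=25)  # e.g. offload adapter

        print(f"baseline: {baseline:.1f} MB/s per %CPU")
        print(f"offload:  {offload:.1f} MB/s per %CPU")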

    There are several areas that can be adjusted to improve iSCSI initiator performance on Microsoft Windows host platforms. Several of the items listed below improve general network performance as well as iSCSI initiator storage performance.



    • Network Infrastructure Settings

    • Microsoft Scalable Networking Pack

    • Receive-side Scaling

    • TCP Offload Adapters

    • Full iSCSI Host Bus Adapters (HBA)

    Network Infrastructure Settings


    Many network adapters have feature settings that can improve performance, although not all of these features are available on every adapter. Jumbo Frames, TCP Checksum Offload, and Large Send Offload can be enabled to improve performance. Windows Server 2003 is the first Windows platform that supports network adapters with hardware TCP Checksum Offload and Large Send Offload features.

    In the case of Microsoft Windows Server-based iSCSI target solutions, the network interface adapter settings should be examined on both the iSCSI initiator and the iSCSI target solution. It may be possible to have one side of the iSCSI communication highly optimized and the other side not optimized, resulting in reduced performance. The network features discussed in this section should be examined on the iSCSI initiator and, where possible, the iSCSI target. Implementations and impacts of these features on the iSCSI target may vary.

    Network switches should have Jumbo Frames enabled. Flow control may also need to be enabled in the switch and network adapters if there is heavy network congestion. Enterprise switches are generally designed to be used in higher-traffic networks and are better choices than low-cost switches for iSCSI traffic.
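    A quick calculation shows why Jumbo Frames help: the same amount of iSCSI payload is carried in far fewer frames, so fewer header bytes are sent and fewer per-frame processing events occur on the host. The Python sketch below uses common Ethernet/IPv4/TCP header sizes and ignores iSCSI PDU headers and TCP options, so the figures are approximate.

        # Rough illustration of the Jumbo Frames benefit: fewer frames (and fewer
        # per-frame header bytes) for the same payload. Approximate only; iSCSI
        # PDU headers and TCP options are ignored.

        HEADERS = 14 + 20 + 20            # Ethernet + IPv4 + TCP headers, in bytes

        def frames_for(payload_bytes, mtu):
            per_frame_payload = mtu - 20 - 20            # MTU covers IP + TCP + data
            frames = -(-payload_bytes // per_frame_payload)   # ceiling division
            return frames, frames * HEADERS

        for mtu in (1500, 9000):
            frames, overhead = frames_for(1_000_000, mtu)    # 1 MB of iSCSI data
            print(f"MTU {mtu}: {frames} frames, ~{overhead} header bytes")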

    Microsoft Scalable Networking Pack


    With the growing popularity of multi-core and multi-processor systems, deploying the Microsoft Scalable Networking Pack with advanced, server-class network adapters is highly recommended.

    Microsoft makes the Scalable Networking Pack (SNP) available as a free download for Microsoft Windows Server 2003 (32-bit and 64-bit) and for the Windows XP 64-bit platform. It is also an integrated component of Windows Server 2003 R2 Service Pack 2. This package provides new and improved network acceleration and compatibility with hardware-based offload technologies. Three technologies included in the Scalable Networking Pack help optimize server performance when processing network traffic, and because iSCSI uses the network, it can take advantage of them. These technologies are Receive-side Scaling, TCP Offload, and NetDMA. NetDMA was not tested for this report.


    Receive-side Scaling


    Receive-side Scaling is especially important on multi-core and multi-processor systems because of a limitation in the NDIS 5.1 miniport driver architecture. Without the SNP and Receive-side Scaling, multi-processor and multi-core Windows Server 2003 systems route all incoming network traffic interrupts to exactly one processor core, which limits scalability regardless of the number of processors or processor cores in the system. With SNP, Receive-side Scaling, and the NDIS 5.2 miniport driver, incoming network traffic interrupts are distributed among the processors and processor cores in the computer. Receive-side Scaling-capable network adapters are required to take advantage of this feature; support is currently found in some, but not all, server-class network adapters.

    The Scalable Networking Pack monitors network adapters for Receive-side Scaling capabilities. If a network adapter supports Receive-side Scaling, the Scalable Networking Pack uses this capability across all TCP connections, including connections that are offloaded through TCP Offload.
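    The following Python sketch illustrates the general Receive-side Scaling idea: a hash of each connection's address/port 4-tuple indexes an indirection table that maps the connection to a CPU core, so separate TCP connections (for example, multiple iSCSI sessions) can be serviced on different cores. Real RSS hardware uses a Toeplitz hash and NDIS-managed tables; the hash and table below are simplified stand-ins.

        # Simplified model of RSS: hash the connection 4-tuple into an
        # indirection table that selects a CPU core. The CRC32 hash is a
        # stand-in for the Toeplitz hash used by real RSS-capable adapters.
        import zlib

        NUM_CORES = 4
        indirection_table = [i % NUM_CORES for i in range(128)]

        def core_for(src_ip, src_port, dst_ip, dst_port):
            key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
            return indirection_table[zlib.crc32(key) % len(indirection_table)]

        # Example: four iSCSI sessions from one initiator to one target portal
        # (TCP port 3260 is the standard iSCSI port).
        sessions = [("10.0.0.5", 51000 + i, "10.0.0.50", 3260) for i in range(4)]
        for s in sessions:
            print(s, "-> core", core_for(*s))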


    TCP Offload Adapters


    TCP Chimney is the Microsoft Windows Server term for offloading the TCP protocol stack into network interface adapters. Network adapters that support this feature are also known as TCP/IP Offload Engines (TOE). TCP Chimney is an operating system interface to advanced Ethernet network adapters that can completely manage TCP data transfer, including acknowledgement processing and TCP segmentation and reassembly.
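    To get a sense of the per-segment work that a TOE adapter removes from the host, the short Python calculation below estimates how many TCP segments per second the host would otherwise have to build, checksum, and acknowledge at Gigabit Ethernet line rate. The figures are theoretical line-rate maximums, not measurements from this report.

        # Theoretical segment rate at 1 Gb/s for two MTU sizes. Without TCP
        # offload, the host protocol stack processes each of these segments
        # (plus the corresponding ACK traffic) in software.
        LINE_RATE_BITS_PER_S = 1_000_000_000      # Gigabit Ethernet

        for mtu in (1500, 9000):
            segments_per_s = LINE_RATE_BITS_PER_S / 8 / mtu
            print(f"MTU {mtu}: ~{segments_per_s:,.0f} segments per second")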

    Full iSCSI Host Bus Adapters (HBA)


    Another approach to offloading CPU processing cycles is to combine the iSCSI initiator function and full TCP processing on one adapter card and perform all of these functions in hardware. The work performed for this report used the Microsoft iSCSI software initiator for all examples, so iSCSI HBAs were not used; however, many models are supported on Windows servers.

    Performance Result Summary by Initiator Network Adapter Type


    Although this report is not a full performance benchmark, several performance measurements were taken using various network adapters with the same I/O workloads.

    Ethernet network adapters are one important component of an iSCSI storage solution. Best practices recommend that a true server-class network adapter be used for iSCSI storage applications. The low-cost network adapter listed below is not a true server-class network adapter and was used in these tests only as a point of reference; the results underscore the importance of server-class network adapters in iSCSI deployments.

    Three different types of Gigabit Ethernet network adapters were used for these tests. Two low-cost network adapters were deployed, each with one port; the advanced network adapter and the TCP Offload adapter are dual-port, server-class network adapters. The low-cost network adapters are the least expensive of the three but are not recommended for iSCSI storage solutions; the advanced network adapters are available at a mid-range price, and the TCP Offload adapters are, by comparison, more expensive. The three types of network adapters used in the Demartek lab were:


    • Low-cost network adapter: NetGear® GA-311

    • Advanced network adapter supporting Receive-side Scaling: Intel® Pro/1000 PT

    • TCP Offload network adapter: Alacritech® SEN2002XT

    The CPU usage on the dual-core, single-processor server using the low-cost network adapter without the Scalable Networking Pack was significantly skewed toward the first core, with very little activity on the second core, especially during read operations. When SNP was installed and the advanced network adapter and the TCP Offload adapter were each used, the dual-core server exhibited lower and more evenly balanced CPU utilization. The following Task Manager snapshots highlight the differences for light to moderate workloads using a mid-range iSCSI target solution.

    [Task Manager CPU utilization snapshots]
    • Dual-core server with low-cost network adapter
    • Dual-core server with SNP and Receive-side Scaling, server-class network adapter
    • Dual-core server with SNP and TCP Offload, server-class network adapter

    The differences between server-class network adapters and low-cost network adapters became obvious during our tests. Under heavy workloads with a high-performance iSCSI target solution, the low-cost network adapter configuration became unacceptably slow, fully utilizing the processor and locking out other processes on the server, including mouse and keyboard input. The same workloads run with the advanced network adapter and the TCP Offload adapter, both server-class network adapters, completed in the expected time and did not lock out other processes.

    The charts below show a representative sample of CPU utilization percentages for the three types of network adapters. Two paths, using MPIO, were used for this sample.

    Network adapter legend:



    • NIC-LOW: low-cost network adapter

    • NIC-SVR: advanced server-class network adapter supporting Receive-side Scaling

    • NIC-TOE: TCP Offload adapter supporting TCP/IP Offload Engine

    Performance will vary depending on many factors, including number of processors and processor cores in the application server, amount of memory in the application server, network adapter type, specific network adapter features that are enabled, and the iSCSI target storage system characteristics.