To configure multi-path iSCSI I/O for an initiator that uses the HP All-in-One 1200 iSCSI targets, follow the directions for Microsoft Multi-path I/O in the Deployment section earlier in this document.
Basic Performance Results
The following performance data is not intended as a comprehensive performance benchmark; it provides a general performance overview for the HP StorageWorks All-in-One 1200 Storage System.
Selected performance results are shown below, using a standard server-class network adapter with receive-side scaling enabled on the host. This configuration used two paths from one host and two I/O workers simultaneously accessing two target volumes, with a queue depth of 10. Each target volume used a dedicated path, with no load balancing across the paths.
LeftHand Networks® SAN/iQ®
The LeftHand Networks SAN/iQ storage system is an iSCSI target solution that combines three HP ProLiant DL320s systems with the LeftHand Networks SAN/iQ software. Each module holds 10K or 15K RPM SAS disk drives totaling up to 3.6 TB of raw capacity, configured as RAID 10. In this case, the three ProLiant DL320s servers are clustered together to create a “virtual storage array” consisting of 5.4 TB of total usable storage. In addition to RAID 10 at the disk level, LeftHand offers “network RAID,” an additional layer of protection for individual LUNs that guards against network or other hardware failures.
SAN/iQ includes many additional management features as part of the basic SAN offering. These features include snapshots, volume branching, thin provisioning, offsite DR snapshots, iSCSI load balancing, block-level load balancing, and automated capacity management. Customers can expand the storage cluster at any time by adding units to it. This not only increases capacity, but SAN/iQ also automatically rebalances all existing LUNs across the new configuration, increasing the performance of the SAN as well.
Another capability is LeftHand’s “multi-site SAN,” in which customers physically locate half the cluster in one location and the other half in another, such as a different building or floor. The SAN is then fault-tolerant in that a disaster at one site will not interrupt service. This capability is included in the base offering and requires no additional administration to set up and manage.
Target Configuration Steps
Configure Network Settings for iSCSI Target Device
To install the LeftHand Networks SAN/iQ from factory settings, a computer must be connected via the supplied serial cable to the LeftHand NSM. The first Ethernet port must be given an IP address. Later, using the management console, a virtual IP address is assigned; this is the address that clients use to access the target volumes and to manage the storage cluster.
Launch Management Console
Launching the Management Console begins the discovery process and displays the NSMs. The three basic steps to configure the system, including creation and assignment of all the targets, are listed on the main screen, each driven by a wizard.
The LeftHand Networks SAN/iQ solution uses the concept of management groups to organize its storage clusters. The “Management Group, Clusters and Volumes” wizard steps the administrator through the initial management configuration and creation of the first volume.
The wizard asks a few questions to complete the initial management configuration, including the name of the management group, the virtual IP address to be used for the cluster, and the first volume’s details.
The storage cluster is given a virtual IP address that will be used for all access to the volumes assigned to the cluster. The LeftHand Networks solution presents the virtual IP address to the clients and manages all failover and load-balancing functions behind this virtual address.
After the wizard has the virtual IP address, it prompts for the information to create the first volume, including volume name, replication features and capacity. In this case, 2-way replication is selected, which tells SAN/iQ to provide an additional layer of data protection for this volume.
This wizard can be repeated to create additional volumes. Note that the “Access Volume Wizard” can be run to complete all the remaining target management steps.
Create LUNs on Disk Array
The LUNs are created using the “Management Group, Clusters and Volumes” wizard as described above in step 2.
Make LUNs Ready for Use
To make the volumes ready to use, they must be assigned to a host and appropriate security applied. The “Access Volume Wizard” is run to complete this process. This step can be run directly at the conclusion of the previous wizard from step 2 above.
A volume list provides the association between the volume and a host. After the first volume has been associated with a host, other volumes can be added to the volume list, and these volumes are automatically associated with the same host.
The authentication group provides the information about the hosts that will access the volumes in this volume list. In this case, the name of the host is used as the name of the authentication group.
The iSCSI initiator name of the host is provided, along with any desired load-balancing settings.
This wizard can be repeated for additional volume lists or host information.
Create iSCSI Targets
The targets are created in step 2 above.
Create Multi-path I/O for iSCSI Targets (optional)
Multi-path I/O is performed automatically by the storage cluster on the target side. Initiator-side MPIO is configured below under Initiator Configuration Steps.
Configure Security for iSCSI Targets (optional)
Additional security, such as CHAP, can be configured by editing the authentication group for the volume.
Make iSCSI Targets Ready for Use for iSCSI Initiators
No additional steps are needed to make the targets ready for use.
Initiator Configuration Steps
Configure Multi-path I/O from Application Host
MPIO for the initiator can be enabled by running the SAN/iQ Solution Pack from the application host.
The SAN/iQ DSM for MPIO is selected, which begins the installation for the MPIO DSM.
When the iSCSI initiator is launched to log on to the LeftHand Networks targets, the default addresses are selected and MPIO is enabled for all the paths and targets. The “Advanced” tab in the initiator logon process is not needed.
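The same logon can also be scripted with the iscsicli command-line interface that ships with the Microsoft iSCSI initiator. A minimal sketch follows; the virtual IP address and target IQN shown are placeholders and must be replaced with the values from your SAN/iQ configuration:

```shell
REM Register the SAN/iQ cluster's virtual IP address as a target portal
REM (10.0.0.50 is a placeholder for the cluster's virtual IP).
iscsicli QAddTargetPortal 10.0.0.50

REM Show the targets discovered through that portal.
iscsicli ListTargets

REM Log on to a discovered target by its iSCSI qualified name
REM (the IQN below is illustrative only).
iscsicli QLoginTarget iqn.2003-10.com.lefthandnetworks:mgmtgroup:vol1
```

QLoginTarget performs a quick logon with default settings; it is the command-line equivalent of accepting the defaults in the initiator GUI without visiting the “Advanced” tab.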
Basic Performance Results
The following performance data is not intended to be viewed as a comprehensive performance benchmark, but to provide a general performance overview for the LeftHand Networks SAN/iQ solution.
Selected performance results are shown below, using a standard server-class network adapter without receive-side scaling on the host. This configuration used two paths from two hosts and two I/O workers (one from each host) simultaneously accessing two target volumes, with a queue depth of 20. Each host accessed its target volume with a pair of NICs configured as one “teaming” NIC. The target was configured as “non-mirrored.”
Storage Management Notes
Efficient Storage Management
Storage Manager for SANs
Storage Manager for SANs (SMfS) is a Microsoft Management Console snap-in that administrators can use to create and manage the logical units (LUNs) used to allocate space on storage arrays in both Fibre Channel and iSCSI environments. Storage Manager for SANs can be used with storage area network (SAN) arrays that support the Virtual Disk Service (VDS) through a hardware VDS provider. Because of hardware, protocol, transport-layer, and security differences, configuration and LUN management differ for the two supported environments (iSCSI and Fibre Channel). The feature works with any type of host bus adapter (HBA) or switch on the SAN. A list of VDS providers that have passed the Hardware Compatibility Tests (HCT) is available at http://www.microsoft.com/storage.
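The same hardware VDS providers can also be driven from the command line with the DiskRAID tool rather than the SMfS snap-in. The session below is a hedged sketch: the subsystem number and LUN size are placeholders, and exact command syntax varies by tool version and provider.

```shell
REM DiskRAID is a scriptable front end to VDS hardware providers.
REM Commands inside the DISKRAID> prompt (illustrative):
diskraid
REM   list providers        -- show installed VDS hardware providers
REM   select subsystem 0    -- choose the storage subsystem to manage
REM   create lun simple size=100
REM   list luns             -- confirm the new LUN is present
```

Because DiskRAID talks to the same VDS provider, LUNs created this way appear in Storage Manager for SANs alongside LUNs created through the snap-in.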
LUN management for Fibre Channel subsystems
On a Fibre Channel storage subsystem, LUNs are assigned directly to a server, which accesses the LUN through one or more host bus adapter (HBA) ports. The administrator needs only to identify the server that will access the LUN and enable one or more HBA ports on the server for LUN I/O traffic. Once a LUN is assigned, the server can access it immediately, and the administrator can create, extend, delete, and mask (or unmask) LUNs as needed.
Support for multiple I/O paths. If a server supports Microsoft Multi-path I/O (MPIO), Storage Manager for SANs can provide path failover by enabling multiple ports on the server for LUN I/O traffic. To prevent data loss in a Fibre Channel environment, make sure that the server supports MPIO before enabling multiple ports. (On an iSCSI subsystem, this is not needed: the Microsoft iSCSI initiator (version 2.0) that is installed on the server supports MPIO.)
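One way to confirm that multiple paths are actually in use on an iSCSI host is to list the active sessions from the command line; with MPIO, each path appears as a separate session to the same target:

```shell
REM List active iSCSI sessions and the connections within each.
REM With MPIO enabled, the same target appears once per path
REM (one session per path).
iscsicli SessionList
```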
LUN management for iSCSI subsystems
Unlike on a Fibre Channel storage subsystem, LUNs on an iSCSI subsystem are not directly assigned to a server. For iSCSI, a LUN is assigned to a target – a logical entity that contains one or more LUNs. A server accesses the LUN by logging on to the target using the server’s iSCSI initiator. To log on to a target, the initiator connects to portals on the target; a subsystem has one or more portals, which are associated with targets. If a server’s initiator is logged on to a target, and a new LUN is assigned to the target, the server can immediately access the LUN.
Securing data on an iSCSI SAN. To help secure data transfers between the server and the subsystem, configure security for the login sessions between initiators and targets. Using Storage Manager for SANs, you can configure one-way or mutual Challenge Handshake Authentication Protocol (CHAP) authentication between the initiator and targets, and you can also configure Internet Protocol security (IPSec) data encryption.
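For example, once a one-way CHAP secret has been configured on the target, the initiator supplies the matching credentials at logon. A sketch using iscsicli follows; the target IQN, CHAP user name, and secret are placeholders, and note that the Microsoft initiator requires a CHAP secret of 12 to 16 characters when IPSec is not in use:

```shell
REM Log on with one-way CHAP: the initiator authenticates to the target.
REM The IQN, CHAP user name, and secret below are placeholders and must
REM match the values configured on the target.
iscsicli QLoginTarget iqn.2001-04.com.example:storage:lun1 chapuser secret12chars0
```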
Conclusion

Internet SCSI (iSCSI) can be a useful and relatively inexpensive way to provide storage for new applications or to provide a networked pool of storage for existing applications. Microsoft and its storage partners provide a variety of storage solutions that can be implemented relatively easily. This report allows administrators and IT managers to explore iSCSI technology and see actual deployment examples.
There is no question that iSCSI storage solutions and technology have a place in many IT environments. The performance of iSCSI storage solutions is adequate for many applications and iSCSI technology provides the benefits of storage area network technology for a lower cost than Fibre Channel storage solutions.
For more information on storage for Windows Server, and iSCSI in particular, see the following:
Microsoft Storage at http://www.microsoft.com/storage/
Microsoft iSCSI Storage at http://www.microsoft.com/WindowsServer2003/technologies/storage/iscsi/default.mspx
Microsoft Windows Storage Server at http://www.microsoft.com/windowsserversystem/wss2003/default.mspx
Microsoft Windows Unified Data Storage Server 2003 at http://www.microsoft.com/windowsserversystem/storage/wudss.mspx
Microsoft Storage Technical Articles and White Papers at http://www.microsoft.com/windowsserversystem/storage/indextecharticle.mspx
Microsoft Scalable Networking Pack at http://www.microsoft.com/technet/network/snp/default.mspx
Microsoft Exchange Solution Reviewed Program – Storage at http://technet.microsoft.com/en-us/exchange/bb412164.aspx
Microsoft Cluster Server at http://www.microsoft.com/windowsserver2003/technologies/clustering/default.mspx
For more information on the Microsoft storage partner products mentioned in this report, see the following:
Dell PowerVault NX1950 Networked Storage Solution at http://www.dell.com/content/products/productdetails.aspx/pvaul_nx1950?c=us&cs=555&l=en&s=biz
EqualLogic PS3800XV at http://www.equallogic.com/products/view.aspx?id=1989
HDS TagmaStore AMS1000 at http://www.hds.com/products_services/adaptable_modular_storage/
HP StorageWorks 1200 All-in-One Storage System at http://www.hp.com/go/AiOStorage
LeftHand Networks SAN/iQ at http://www.lefthandnetworks.com/products/nsm.php
For more information on RFC documents, see the following:
RFC 1334: PPP Authentication Protocols (PAP and CHAP) at http://rfc.net/rfc1334.html