Security for iSCSI includes some security features in the iSCSI layer itself, separate from any security layers that may be present in the lower TCP, IP, and Ethernet layers. The iSCSI security features can be enabled or disabled, as desired.
Each environment must address the issue of running storage traffic over the same network as the “public” LAN. Many environments address this by running iSCSI storage traffic over a separate network or VLAN, which is the best practice recommended by Microsoft for applications using iSCSI storage. The iSCSI features described below can provide additional security even when iSCSI traffic runs on a separate network.
The Microsoft iSCSI initiator uses Challenge Handshake Authentication Protocol (CHAP) to verify the identity of iSCSI host systems that are attempting to access storage targets. Using CHAP, the iSCSI initiator and iSCSI target share a predefined secret. The initiator combines the secret with other information into a value and calculates a one-way hash using MD5. The hash value is transmitted to the target. The target computes a one-way hash of its shared secret and other information. If the hash values match, the initiator is authenticated. The other information includes an ID value that is increased with each CHAP dialog to protect against replay attacks. Mutual CHAP is supported.
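The CHAP exchange described above can be sketched in a few lines. This is a minimal illustration of the response calculation defined in RFC 1994 (MD5 over the identifier, the shared secret, and the challenge), not the Microsoft initiator's actual implementation; the variable names and the 16-byte challenge length are illustrative.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Target side: issue a random challenge with a fresh identifier.
# The identifier is incremented on each CHAP dialog to resist replay attacks.
secret = b"shared-chap-secret"      # predefined secret known to both sides
identifier = 1
challenge = os.urandom(16)

# Initiator side: combine the secret with the challenge and send the hash.
response = chap_response(identifier, secret, challenge)

# Target side: recompute the hash with its own copy of the secret and compare.
expected = chap_response(identifier, secret, challenge)
print(response == expected)  # True only when both sides share the same secret
```

Note that the secret itself never crosses the wire; only the one-way hash does, and the changing identifier and challenge make each response valid for a single dialog. In mutual CHAP the same exchange is also performed in the reverse direction, with the initiator challenging the target.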
CHAP is generally regarded as more secure than the Password Authentication Protocol (PAP), which transmits the secret itself in cleartext. More information on CHAP and PAP is available in RFC 1334.
IPSec is also available for iSCSI. If IPSec is enabled, all IP packets sent during data transfers are encrypted and authenticated. A common key is set on all IP portals, allowing all peers to authenticate each other and negotiate packet encryption.
The Microsoft iSCSI initiator can be configured with the CHAP secret by clicking the “Secret” button on the “General” tab of the iSCSI initiator.
Microsoft Application Deployments for iSCSI
Microsoft Cluster Server
Microsoft Cluster Server (MSCS) supports the use of iSCSI-connected disks as shared storage. Before the cluster is created, the storage that will be shared among the cluster nodes, including the quorum disk, must be available. The volumes that will be shared by the cluster must be created on the iSCSI target and made available to the first node (iSCSI initiator) of the cluster. After the cluster is created, the iSCSI target should be configured to allow the other nodes of the cluster to access the same volumes as the first node.
Pre-Cluster Network Preparation
Microsoft clusters require at least one network interface configured with a static IP address on each node of the cluster for cluster communication. The cluster nodes also need at least one separate network interface to communicate to the clients on the LAN. Consult the Related Links section of this document for additional information on Microsoft Clusters.
Target Pre-Cluster Preparation Tasks
The volumes are created and associated with the iSCSI target that is mapped to the first node of the cluster. In the following example, the HP StorageWorks 1200 All-in-One was used as the iSCSI target storage for the cluster; it uses virtual disks for the volumes.
Create the virtual disk for the volume that will be used as the quorum disk.
Create additional virtual disk volumes for the applications that will use the cluster.
After the volumes are created, the second node is added to the iSCSI initiator list for the target.
Because cluster nodes must be members of a domain, the full domain identifiers are added to the list of iSCSI initiators.
Cluster Administrator now shows the first node of the cluster.
The next step is to add another node to the cluster.
The second node is “dmrtk-srvr-b2”.
The wizard performs its initial analysis of the second node.
Cluster Administrator now shows the second node.
The cluster groups are displayed.
The groups can be assigned to the other node as needed. Here, Groups 3 and 4 are moved to the second node, as in a fail-over scenario. That storage is now visible to the second node and no longer visible to the first. The Disk Management view below, taken from the first node after the resources were moved, shows that disks 7 and 8 are no longer accessible to the first node.
The cluster is now ready for applications and data.