• Estimating the Amount of Data to Be Stored
  • Choosing a Storage Array
  • Hardware RAID Levels
  • Choosing the RAID Level
  • Selecting a Stripe Unit Size
  • Determining the Volume Layout

    Choosing Storage


    The most important considerations in choosing storage systems include the following:

    • Understanding the characteristics of current and future storage workloads.

    • Understanding that application behavior is essential for both storage subsystem planning and performance analysis.

    • Providing necessary storage space, bandwidth, and latency characteristics for current and future needs.

    • Selecting a data layout scheme (such as striping), redundancy architecture (such as mirroring), and backup strategy.

    • Using a procedure that provides the required performance and data recovery capabilities.

    • Following power guidelines; that is, calculating the expected average power required, both in total and per unit of volume (such as watts per rack).

    When compared to 3.5-inch disks, 2.5-inch disks have greatly reduced power requirements, but because they can be packed more densely into racks or servers, they can increase cooling requirements per rack or per server chassis. Note that enterprise disk drives are currently not built to withstand frequent power-up/power-down cycles. Attempts to reduce energy consumption by shutting down a server's internal or external storage should therefore be weighed carefully against possible increases in operating costs or decreases in data availability caused by a higher rate of disk failures. This issue might be alleviated in future enterprise disk designs or through the use of solid-state storage (SSDs).
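
    As a concrete illustration of the power guideline above, the following Python sketch totals the expected average power for a rack. All wattage figures and the duty-cycle assumption are hypothetical placeholders, not measured values; substitute figures from your hardware vendor's specifications.

        # Illustrative only: estimate average storage power per rack.
        # Every number here is a hypothetical placeholder.

        DRIVE_IDLE_W = 8.0          # assumed idle watts per 3.5-inch drive
        DRIVE_ACTIVE_EXTRA_W = 4.0  # assumed extra watts while active
        DUTY_CYCLE = 0.4            # assumed fraction of time drives are active

        def avg_drive_watts():
            return DRIVE_IDLE_W + DUTY_CYCLE * DRIVE_ACTIVE_EXTRA_W

        def rack_watts(enclosures, drives_per_enclosure,
                       enclosure_overhead_w=150.0):
            """Average watts per rack: drives plus controller and fan overhead."""
            drives = enclosures * drives_per_enclosure
            return drives * avg_drive_watts() + enclosures * enclosure_overhead_w

        # Example: 10 enclosures of 14 drives each.
        print(f"{rack_watts(10, 14):.0f} W per rack")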
    The better you understand the workloads on a specific server or set of servers, the more accurately you can plan. The following are some important workload characteristics (a small sketch that summarizes them from an I/O trace appears after the list):

    • Read:write ratio

    • Sequential vs. random access, including both temporal and spatial locality

    • Request sizes

    • Request concurrency, interarrival rates, and burstiness (that is, patterns of request arrival rates)
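
    To make these characteristics concrete, the following Python sketch summarizes a small, hypothetical I/O trace. The trace format (timestamp in seconds, operation, request bytes) is an assumption for illustration only; real Windows traces require their own parsing.

        # Minimal sketch: summarize workload characteristics from an I/O trace.
        # The (timestamp_seconds, "R"/"W", request_bytes) format is assumed.
        from statistics import mean

        trace = [
            (0.000, "R", 4096), (0.001, "R", 4096), (0.050, "W", 65536),
            (0.051, "R", 8192), (0.052, "R", 4096), (0.300, "W", 65536),
        ]

        reads = [t for t in trace if t[1] == "R"]
        writes = [t for t in trace if t[1] == "W"]
        print(f"read:write ratio  = {len(reads)}:{len(writes)}")
        print(f"mean request size = {mean(s for _, _, s in trace):.0f} bytes")

        # Interarrival gaps hint at burstiness: a large max-to-mean ratio
        # suggests bursty arrivals rather than a steady stream.
        gaps = [b[0] - a[0] for a, b in zip(trace, trace[1:])]
        print(f"mean interarrival = {mean(gaps) * 1000:.1f} ms, "
              f"max = {max(gaps) * 1000:.1f} ms")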

    Estimating the Amount of Data to Be Stored


    When you estimate how much data will be stored on a new server, consider these issues:

    • How much data you will move to the new server from existing servers.

    • How much replicated data you will store on the new file server if the server is a file server replica member.

    • How much data you will store on the server in the future.

    A general guideline is to assume that growth will be faster in the future than it was in the past. Investigate whether your organization plans to hire many employees, whether any groups in your organization are planning large projects that will require additional storage, and so on.

    You must also consider how much space is used by operating system files, applications, RAID redundancy, log files, and other factors. Table 7 describes some factors that affect server storage capacity.

    Table 7. Factors That Affect Server Storage Capacity

    Operating system files: At least 15 GB. To provide space for optional components, future service packs, and other items, plan for an additional 3 to 5 GB for the operating system volume. A Windows installation can require even more space for temporary files.

    Paging file: For smaller servers, 1.5 times the amount of RAM, by default. For servers that have hundreds of gigabytes of memory, you might be able to eliminate the paging file; otherwise, you might need to limit its size because of space constraints (available disk capacity). The benefit of a paging file larger than 50 GB is unclear.

    Memory dump: Depending on the memory dump file option that you choose, as large as the amount of physical memory plus 1 MB. On servers that have very large amounts of memory, full memory dumps become intractable because of the time that is required to create, transfer, and analyze the dump file.

    Applications: Varies according to the application. Example applications include backup and disk quota software, database applications, and optional components such as Recovery Console, Services for UNIX, and Services for NetWare.

    Log files: Varies according to the applications that create the log files. Some applications let you configure a maximum log file size; make sure that you have enough free space to store the log files.

    Data layout and redundancy: Varies depending on cost, performance, reliability, availability, and power goals. For more information, see "Choosing the RAID Level" later in this guide.

    Shadow copies: 10 percent of the volume, by default, but we recommend increasing this allocation based on the frequency of snapshots and the rate of disk data updates.
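
    The factors in Table 7 can be rolled up into a rough capacity estimate. The following Python sketch is one way to do that arithmetic; the input sizes, growth rate, and RAID overhead factor are hypothetical planning assumptions, not values from any Microsoft sizing tool.

        # Rough capacity-planning arithmetic based on the factors in Table 7.
        # All inputs are hypothetical planning assumptions in GB.

        def required_capacity_gb(user_data_gb, ram_gb, annual_growth=0.30,
                                 years=3, raid_overhead=2.0,
                                 shadow_copy_frac=0.10):
            os_files = 15 + 5                # base OS plus headroom (Table 7)
            paging = 1.5 * ram_gb            # default paging file sizing
            dump = ram_gb + 0.001            # full memory dump: RAM plus 1 MB
            grown = user_data_gb * (1 + annual_growth) ** years
            with_shadow = grown * (1 + shadow_copy_frac)
            raw = os_files + paging + dump + with_shadow
            return raw * raid_overhead       # 2.0 models RAID 1 mirroring

        # Example: 500 GB of user data today on a server with 32 GB of RAM.
        print(f"{required_capacity_gb(500, 32):.0f} GB of raw disk")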

    Choosing a Storage Array


    There are many considerations in choosing a storage array and adapters, starting with the type of storage communication protocol. Table 8 describes the main options.

    Table 8. Options for Storage Array Selection

    Fibre Channel or SCSI: Fibre Channel enables long glass or copper cables to connect the storage array to the system and provides high bandwidth. SCSI also provides high bandwidth, but it has cable length restrictions.

    SAS or SATA: These serial protocols improve performance, reduce cable length limitations, and reduce cost. SAS and SATA drives are replacing much of the SCSI market. In general, SATA drives are built with higher capacity and lower cost targets than SAS drives; the price premium of SAS is typically attributed to performance.

    Hardware RAID capabilities: For maximum performance and reliability, enterprise storage controllers should offer RAID capabilities. RAID levels 0, 1, 0+1, 5, and 6 are described in Table 9.

    Maximum storage capacity: The total usable storage space.

    Storage bandwidth: The maximum peak and sustained bandwidths at which storage can be accessed are determined by the number of physical disks in the array, the speed of the controllers, the type of bus protocol (such as SAS or SATA), whether RAID is hardware-managed or software-managed, and the adapters that connect the storage array to the system. Of course, the more important values are the achievable bandwidths for the specific workloads to be executed on the servers that access the storage.

    Hardware RAID Levels


    Most storage arrays provide some hardware RAID capabilities. Table 9 lists the common RAID levels.

    Table 9. RAID Options

    Just a bunch of disks (JBOD)

    This is not a RAID level, but instead is the baseline against which to measure RAID performance, reliability, availability, cost, capacity, and energy consumption. Individual disks are referenced separately, not as a combined entity.

    In some scenarios, JBOD actually provides better performance than striped data layout schemes. For example, when serving multiple lengthy sequential streams, performance is best when a single disk services each stream. Also, workloads that are composed of small, random requests do not experience performance improvements when they are moved from JBOD to a striped data layout.

    JBOD is susceptible to static and dynamic “hot spots” (frequently accessed ranges of disk blocks) that reduce available storage bandwidth due to the resulting load imbalance between the physical drives.

    Any physical disk failure results in data loss in a JBOD configuration. However, the loss is limited to the failed drives. In some scenarios, JBOD provides a level of data isolation that can be interpreted as actually offering greater reliability than striped configurations.



    Spanning

    This is also not a RAID level, but instead is the simple concatenation of multiple physical disks into a single logical disk. Each disk contains one continuous set of sequential logical blocks. Spanning has the same performance and reliability characteristics as JBOD.

    RAID 0 (striping)

    RAID 0 is a data layout scheme in which sequential logical blocks of a specified size (the stripe unit) are laid out in a round-robin manner across multiple disks. It presents a combined logical disk that stripes disk accesses over a set of physical disks.

    For most workloads, a striped data layout provides better performance than JBOD if the stripe unit is appropriately selected based on server workload and storage hardware characteristics. The overall storage load is balanced across all physical drives.

    This is the least expensive RAID configuration because all of the disk capacity is available for storing the single copy of data.

    Because no capacity is allocated for redundant data, RAID 0 does not provide data recovery mechanisms such as those provided in the other RAID schemes. Also, the loss of any disk results in data loss on a larger scale than JBOD because the entire file system or raw volume spread across n physical disks is disrupted; every nth block of data in the file system is missing.



    RAID 1 (mirroring)

    RAID 1 is a data layout scheme in which each logical block exists on multiple physical disks (typically two). It presents a logical disk that consists of a set of two or more mirrored disks.

    RAID 1 often has worse bandwidth and latency for write operations when compared to RAID 0 (or JBOD). This is because data from each write request must be written to two or more physical disks. Request latency is based on the slowest of the two (or more) write operations that are necessary to update all copies of the updated data blocks.

    RAID 1 can provide faster read operations than RAID 0 because it can read from the least busy physical disk from the mirrored pair, or the disk that will experience the shortest mechanical positioning delays.

    RAID 1 is the most expensive RAID scheme in terms of physical disks because half (or more) of the disk capacity stores redundant data copies. RAID 1 can survive the loss of any single physical disk. In larger configurations it can survive multiple disk failures, if the failures do not involve all the disks of a specific mirrored disk set.

    RAID 1 has greater power requirements than a non-mirrored storage configuration. RAID 1 doubles the number of disks and therefore doubles the required amount of idle power. Also, RAID 1 performs duplicate write operations that require twice the power of non-mirrored write operations.

    RAID 1 provides the fastest recovery from a physical disk failure of the ordinary RAID levels: only a single disk (the surviving member of the broken mirror pair) must be read to rebuild the replacement drive. Note that this disk typically remains available to service data requests throughout the rebuilding process.



    RAID 0+1 (striped mirrors)

    The combination of striping and mirroring is intended to provide the performance benefits of RAID 0 and the redundancy benefits of RAID 1.

    This option is also known as RAID 1+0 and RAID 10.

    Cost and power characteristics are similar to those of RAID 1.


    RAID 5 (rotated parity)

    RAID 5 presents a logical disk composed of multiple physical disks that have data striped across the disks in sequential blocks (stripe units) in a manner similar to RAID 0. However, the underlying physical disks have parity information scattered throughout the disk array, as Figure 4 shows.

    For read requests, RAID 5 has characteristics that resemble those of RAID 0. However, small RAID 5 writes are much slower than those of JBOD or RAID 0 because each parity block that corresponds to the modified data block must also be updated. This process requires three additional disk requests. Because four physical disk requests are generated for every logical write, bandwidth is reduced by approximately 75 percent.

    RAID 5 provides data recovery capabilities because data can be reconstructed from the parity. RAID 5 can survive the loss of any one physical disk, as opposed to RAID 1, which can survive the loss of multiple disks as long as an entire mirrored set is not lost.

    RAID 5 requires more time than RAID 1 to recover from a lost physical disk because the data and parity from the failed disk can be re-created only by reading all of the other disks in their entirety. Performance during the rebuilding period is severely reduced, not only because of the rebuilding traffic itself but also because every read or write that targets data stored on the failed disk must read all the other disks (an entire stripe) to re-create the missing data.

    RAID 5 is less expensive than RAID 1 because it requires only a single additional disk per array, instead of doubling (or more than doubling) the total number of disks in the array.

    Power guidelines: RAID 5 might consume more or less energy than a mirrored configuration, depending on the number of drives in the array, the characteristics of the drives, and the characteristics of the workload. RAID 5 might use less energy if it uses significantly fewer drives. The additional disk adds to the required amount of idle power as compared to a JBOD array, but it requires less additional idle power versus a full mirrored set of drives. However, RAID 5 requires four accesses for every random write request in order to read the old data, read the old parity, compute the new parity, write the new data, and write the new parity. This means that the power needed beyond idle to perform the write operations is up to 4X that of JBOD or 2X that of a mirrored configuration. (Note that depending on the workload, there may be only two seeks, not four, that require moving the disk actuator.) Thus, though unlikely in most configurations, RAID 5 might have greater energy consumption. This might happen in the case of a heavy workload being serviced by a small array or an array of disks whose idle power is significantly lower than their active power.



    RAID 6 (double-rotated redundancy)

    Traditional RAID 6 is basically RAID 5 with additional redundancy built in. Instead of a single block of parity per stripe of data, two blocks of redundancy are included. The second block uses a different redundancy code (instead of parity), which enables data to be reconstructed after the loss of any two disks.

    Power guidelines: RAID 6 might consume more or less energy than a mirrored configuration, depending on the number of drives in the array, the characteristics of the drives, and the characteristics of the workload. RAID 6 might use less energy if it uses significantly fewer drives. The additional disk adds to the required amount of idle power as compared to a JBOD array, but it requires less additional idle power versus a full mirrored set of drives. However, RAID 6 requires six accesses for every random write request in order to read the old data, read the old redundancy data (two sets), compute the new redundancy data (two sets), write the new data, and write the new redundancy data (two sets). This means that the power needed beyond idle to perform the writes is up to 6X that of JBOD or 3X that of a mirrored configuration. (Note that depending on the workload, there may be only three seeks, not six, that require moving the disk actuator.) Thus, though unlikely in most configurations, RAID 6 might have greater energy consumption. This might happen in the case of a heavy workload being serviced by a small array or an array of disks whose required idle power is significantly lower than their active power.

    There are some hardware-managed arrays that use the term RAID 6 for other schemes that attempt to improve the performance and reliability of RAID 5. For example, the data disks can be arranged in a two-dimensional matrix, with both vertical and horizontal parity being maintained, but that scheme requires even more drives. This document uses the traditional definition of RAID 6.

    Rotated redundancy schemes (such as RAID 5 and RAID 6) are the most difficult to understand and plan for. Figure 4 shows a RAID 5 example, in which the sequence of logical blocks presented to the host is A0, B0, C0, D0, A1, B1, C1, E1, and so on; a small sketch of this layout and of the parity update follows the figure.





    Figure 4. RAID 5 Overview
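
    To make the rotated-parity layout and the small-write penalty concrete, the following Python sketch models the block placement that Figure 4 illustrates and the read-modify-write parity update. It is an idealized model under one common rotation convention, not a description of any particular controller's implementation.

        # Idealized RAID 5 model: rotated parity placement (matching the
        # block sequence described for Figure 4) and the read-modify-write
        # parity update.

        def raid5_location(logical_block, num_disks):
            """Return (stripe, disk) for a logical block with rotated parity."""
            data_per_stripe = num_disks - 1
            stripe = logical_block // data_per_stripe
            parity_disk = (num_disks - 1 - stripe) % num_disks  # rotates per stripe
            disk = logical_block % data_per_stripe
            if disk >= parity_disk:      # skip over the parity disk
                disk += 1
            return stripe, disk

        def small_write_parity(old_data, old_parity, new_data):
            """New parity for a single-block update: P' = P ^ Dold ^ Dnew.
            The two reads (old data, old parity) and two writes (new data,
            new parity) are the four requests behind the roughly 75 percent
            write bandwidth reduction described in Table 9."""
            return old_parity ^ old_data ^ new_data

        # Blocks 0..7 on 5 disks (A=0 .. E=4): A0 B0 C0 D0, then A1 B1 C1 E1.
        for lb in range(8):
            print(lb, raid5_location(lb, num_disks=5))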

    Choosing the RAID Level


    Each RAID level involves a trade-off between the following factors:

    • Performance

    • Reliability

    • Availability

    • Cost

    • Capacity

    • Power

    To determine the best RAID level for your servers, evaluate the read and write loads of all data types and then decide how much you can spend to achieve the performance and availability/reliability that your organization requires. Table 10 describes common RAID levels and their relative performance, reliability, availability, cost, capacity, and energy consumption.



    Table 10. RAID Trade-Offs

    JBOD
    Performance pros: Concurrent sequential streams to separate disks.
    Performance cons: Susceptibility to load imbalance.
    Reliability pros: Data isolation; a single loss affects one disk.
    Reliability cons: Data loss after one failure.
    Availability pros: A single loss does not prevent access to the other disks.
    Cost, capacity, and power pros: Minimum cost; minimum power.

    RAID 0 (striping); two-disk minimum
    Performance pros: Balanced load; potential for better response times, throughput, and concurrency.
    Performance cons: Difficult stripe unit size choice.
    Reliability cons: Data loss after one failure; a single loss affects the entire array.
    Availability cons: A single loss prevents access to the entire array.
    Cost, capacity, and power pros: Minimum cost; minimum power.

    RAID 1 (mirroring); two-disk minimum
    Performance pros: Two data sources for every read request (up to 100 percent read performance improvement).
    Performance cons: Writes must update all mirrors.
    Reliability pros: A single loss, and often multiple losses (in large configurations), are survivable.
    Availability pros: A single loss, and often multiple losses (in large configurations), do not prevent access.
    Cost, capacity, and power cons: Twice the cost of RAID 0 or JBOD; up to 2X power.

    RAID 0+1 (striped mirrors); four-disk minimum
    Performance pros: Two data sources for every read request (up to 100 percent read performance improvement); balanced load; potential for better response times, throughput, and concurrency.
    Performance cons: Writes must update mirrors; difficult stripe unit size choice.
    Reliability pros: A single loss, and often multiple losses (in large configurations), are survivable.
    Availability pros: A single loss, and often multiple losses (in large configurations), do not prevent access.
    Cost, capacity, and power cons: Twice the cost of RAID 0 or JBOD; up to 2X power.

    RAID 5 (rotated parity); one additional disk required; three-disk minimum
    Performance pros: Balanced load; potential for better read response times, throughput, and concurrency.
    Performance cons: Up to 75 percent write performance reduction because of read-modify-write; decreased read performance in failure mode; all sectors must be read for reconstruction, a major slowdown; danger of data in an invalid state after power loss and recovery.
    Reliability pros: A single loss is survivable, although "in-flight" write requests might still become corrupted.
    Reliability cons: Multiple losses affect the entire array; after a single loss, the array is vulnerable until it is reconstructed.
    Availability pros: A single loss does not prevent access.
    Availability cons: Multiple losses prevent access to the entire array; to speed reconstruction, application access might be slowed or stopped.
    Cost, capacity, and power pros: Only one more disk to power.
    Cost, capacity, and power cons: Up to 4X power for write requests (excluding idle power).



    RAID 6 (two separate erasure codes); two additional disks required; five-disk minimum
    Performance pros: Balanced load; potential for better read response times, throughput, and concurrency.
    Performance cons: Up to 83 percent write performance reduction because of multiple read-modify-write operations; decreased read performance in failure mode; all sectors must be read for reconstruction, a major slowdown; danger of data in an invalid state after power loss and recovery.
    Reliability pros: Single and double losses are survivable, although "in-flight" write requests might still become corrupted.
    Reliability cons: More than two losses affect the entire array; after two losses, the array is vulnerable until it is reconstructed.
    Availability pros: Single and double losses do not prevent access.
    Availability cons: More than two losses prevent access to the entire array; to speed reconstruction, application access might be slowed or stopped.
    Cost, capacity, and power pros: Only two more disks to power.
    Cost, capacity, and power cons: Up to 6X power for write requests (excluding idle power).



    The following are sample uses for various RAID levels:



    • JBOD: Concurrent video streaming.

    • RAID 0: Temporary or reconstructable data, workloads that can develop hot spots in the data, and workloads with high degrees of unrelated concurrency.

    • RAID 1: Database logs, critical data, and concurrent sequential streams.

    • RAID 0+1: A general purpose combination of performance and reliability for critical data, workloads with hot spots, and high-concurrency workloads.

    • RAID 5: Web pages, semicritical data, workloads without small writes, scenarios in which capital and operating costs are an overriding factor, and read-dominated workloads.

    • RAID 6: Data mining, critical data (assuming quick replacement or hot spares), workloads without small writes, scenarios in which cost or power is a major factor, and read-dominated workloads. RAID 6 might also be appropriate for massive datasets, where the cost of mirroring is high and double-disk failure is a real concern (due to the time required to complete an array parity rebuild for disk drives greater than 1 TB).

    If you use more than two disks, RAID 0+1 is usually a better solution than RAID 1.

    To determine the number of physical disks that you should include in RAID 0, RAID 5, and RAID 0+1 virtual disks, consider the following information (a rough capacity and reliability model appears after this list):


    • Bandwidth (and often response time) improves as you add disks.

    • Reliability, in terms of mean time to failure for the array, decreases as you add disks.

    • Usable storage capacity increases as you add disks, but so does cost.

    • For striped arrays, the trade-off is between data isolation (small arrays) and better load balancing (large arrays). For RAID 1 arrays, the trade-off is between better cost and capacity (mirrors, that is, a depth of two) and the ability to withstand multiple disk failures (shadows, that is, depths of three or even four). Read and write performance issues can also affect RAID 1 array size. For RAID 5 arrays, the trade-off is between better data isolation and mean time between failures (MTBF) for small arrays and better cost, capacity, and power for large arrays.

    • Because hard disk failures are not independent, array sizes must be limited when the array is made up of actual physical disks (that is, a bottom-tier array). The exact limit is very difficult to determine.
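
    As a rough model of the capacity and reliability trade-offs in the preceding list, the following Python sketch compares usable capacity and a first-order estimate of data-loss probability as disks are added. It assumes independent disk failures, which, as noted above, real disks violate, and it ignores rebuild windows; treat the numbers as directional only.

        # First-order model of the disk-count trade-offs above. Assumes
        # independent disk failures (optimistic, as the text notes).

        def usable_fraction(level, n):
            """Usable capacity fraction for an n-disk array."""
            return {"RAID0": 1.0,
                    "RAID1": 0.5,                 # two-way mirrors
                    "RAID5": (n - 1) / n,
                    "RAID6": (n - 2) / n}[level]

        def data_loss_probability(level, n, f=0.01):
            """Rough chance of data loss, where f is the per-disk failure
            probability over some planning window."""
            if level == "RAID0":
                return 1 - (1 - f) ** n           # any one failure loses data
            if level == "RAID1":
                return (n // 2) * f * f           # both halves of some mirror
            if level == "RAID5":
                return n * (n - 1) / 2 * f ** 2   # any two disks
            if level == "RAID6":
                return n * (n - 1) * (n - 2) / 6 * f ** 3  # any three disks

        for n in (4, 8, 14):
            print(f"RAID5, {n} disks: usable={usable_fraction('RAID5', n):.2f}, "
                  f"loss~{data_loss_probability('RAID5', n):.4f}")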

    In the absence of hardware reliability data, use the following array size guidelines:



    • Bottom-tier RAID 5 arrays should not extend beyond a single desk-side storage tower or a single row in a rack-mount configuration. This means approximately 8 to 14 physical disks for modern 3.5-inch storage enclosures. Smaller 2.5-inch disks can be racked more densely and therefore might require dividing into multiple arrays per enclosure.

    • Bottom-tier mirrored arrays should not extend beyond two towers or rack-mount rows, with data mirrored between towers or rows when possible. These guidelines help avoid or reduce the decrease in mean time between catastrophic failures that comes from depending on multiple buses, power supplies, and so on, in separate storage enclosures; mirroring between towers or rows also limits the impact of losing an entire enclosure to one copy of the data.

    Selecting a Stripe Unit Size


    The Windows volume manager stripe unit is fixed at 64 KB; hardware solutions range from 4 KB to 1 MB or more. The ideal stripe unit size maximizes disk activity without unnecessarily breaking up requests by requiring multiple disks to service a single request. For example, consider the following cases (a heuristic sketch follows this list):

    • One long stream of sequential requests on JBOD uses only one disk at a time. To keep all striped disks in use for such a workload, the stripe unit should be no larger than 1/n times the typical request size, where n is the number of disks.

    • For n streams of small serialized random requests, if n is significantly greater than the number of disks and if there are no hot spots, striping does not increase performance over JBOD. However, if hot spots exist, the stripe unit size must maximize the possibility that a request will not be split while it minimizes the possibility of a hot spot falling entirely within one or two stripe units. You might choose a low multiple of the typical request size, such as 5X or 10X, especially if the requests are on some boundary (for example, 4 KB or 8 KB).

    • If requests are large and the average (or perhaps peak) number of outstanding requests is smaller than the number of disks, you might need to split some requests across disks so that all disks are being used. You can interpolate an appropriate stripe unit size from the previous two examples. For example, if you have 10 disks and 5 streams of requests, split each request in half (that is, use a stripe unit size equal to half the request size).

    • Optimal stripe unit size increases with concurrency, burstiness, and typical request sizes.

    • Optimal stripe unit size decreases with sequentiality and with good alignment between data boundaries and stripe unit boundaries.
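
    One way to collapse these rules of thumb into code: the following Python sketch suggests a stripe unit size from a few workload parameters. The branch conditions and the 8X hot-spot multiplier are illustrative choices within the ranges discussed above, not validated tuning values.

        # Rough encoding of the stripe-unit rules of thumb above. The branch
        # conditions and multipliers are illustrative, not validated values.

        def suggest_stripe_unit(request_bytes, num_disks, streams,
                                sequential, has_hot_spots):
            if sequential and streams == 1:
                # One long sequential stream: split each request across all
                # disks to keep them all busy.
                return max(request_bytes // num_disks, 4096)
            if not sequential and streams > num_disks:
                if has_hot_spots:
                    # Avoid splitting requests but spread hot ranges: a low
                    # multiple of the request size (8X is an arbitrary pick
                    # from the 5X to 10X range mentioned above).
                    return 8 * request_bytes
                return request_bytes     # striping adds little here anyway
            # Fewer streams than disks: split requests just enough so that
            # all disks stay in use.
            splits = max(num_disks // max(streams, 1), 1)
            return max(request_bytes // splits, 4096)

        # The example above: 10 disks, 5 streams of 512-KB requests yield
        # a stripe unit of half the request size (262144 bytes).
        print(suggest_stripe_unit(512 * 1024, 10, 5, sequential=True,
                                  has_hot_spots=False))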

    Determining the Volume Layout


    Placing individual workloads into separate volumes has advantages. For example, you can use one volume for the operating system or paging space and one or more volumes for shared user data, applications, and log files. The benefits include fault isolation, easier capacity planning, and easier performance analysis.

    You can place different types of workloads into separate volumes on different physical disks. Using separate disks is especially important for any workload that creates heavy sequential loads, such as log files. In that case, a dedicated set of physical disks (composing the logical disk that the array controller exposes to the operating system) can handle the disk I/O that updates to the log files create. Placing the paging file on a separate virtual disk might also provide some performance improvement during periods of high paging.

    There is also an advantage to combining workloads on the same physical disks, if the disks do not experience high activity over the same time period. This is basically the partnering of hot data with cold data on the same physical drives.

    The “first” partition on a volume usually uses the outermost tracks of the underlying disks and therefore provides better performance.



