You can optimize network throughput and resource usage by applying network adapter tunings, when the adapter makes them available. Keep in mind that the correct set of tunings depends on the network adapter, the workload, the host computer's resources, and your performance goals.
Enable Offload Features
It is almost always beneficial to turn on network adapter offload features. In some instances, however, the network adapter may not be powerful enough to handle the offload capabilities at high throughput. For example, enabling LSO can lower the maximum sustainable throughput on some network adapters. However, if the reduced throughput is not expected to be a limitation, offload capabilities should be enabled even for such network adapters. Note that some network adapters require offload features to be enabled for send and receive paths independently.
Network Adapter Resources
Several network adapters allow the configuration of resources by the administrator. Receive buffers and send buffers are among the parameters that may be set. Some network adapters actively manage their resources, and there is no need to set such parameters for these network adapters.
Some network adapters expose buffer coalescing parameters (sometimes separately for send and receive buffers) for control over interrupt moderation. It is important to consider buffer coalescing when the network adapter does not perform adaptive interrupt moderation.
TCP parameters that can be adjusted for high throughput scenarios are listed in Table 1.
Table 1. TCP Parameters
TcpWindowSize
This value determines the maximum amount of data (in bytes) that can be outstanding on the network at any given time. It can be set to any value from 1 to 65,535 bytes by using the following registry entry:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\TcpWindowSize
The default for a gigabit interface is approximately 65,535 (rounded down to the nearest multiple of full TCP packets), 16,384 for a 100 Mbps link, and 8,192 for all interfaces of lower speeds (for example, modems), again rounded down. Ideally, this value should be set to the product of the end-to-end network bandwidth (in bytes per second) and the round-trip delay (in seconds), also referred to as the bandwidth-delay product, and sized according to the amount of TCP data that the computer is expected to receive.
Tcp1323Opts
On a link with a high bandwidth-delay product (for example, a satellite link), you may need to increase the window size above 64 KB. To do so, enable the TCP options specified in RFC 1323 by appropriately setting the following registry entry:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\Tcp1323Opts
To enable window sizes greater than 65,535 bytes, set this registry entry to 1 (one). After this change has been made, TcpWindowSize can be set to values larger than 64 KB (up to 1 GB).
MaxHashTableSize
This value determines the size of the hash table that holds the state of TCP connections. The default value is 128 multiplied by the square of the number of processors in the system. When a large concurrent connection load is expected on the system, set the following registry entry to a high value to improve the performance of the hash table:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\MaxHashTableSize
The maximum value is 0x10000 (65,536). Consider using the maximum value for large servers that you expect to carry a high connection load. Keep in mind that the table uses nonpaged pool, so do not set the parameter too high if the server has little available nonpaged pool or if you do not anticipate a high connection load.
NumTcbTablePartitions
By default, the table that holds TCP connection states has as many partitions as the square of the number of processors. In most cases, this setting is appropriate and lowers contention on the table. However, the default may be too high for systems with 16 or more processors and may result in excessive CPU usage. In that case, set the following registry entry to a value lower than the square of the number of processors:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\NumTcbTablePartitions
MaxUserPort
A port is consumed whenever an active connection is opened from the computer. Given the default number of available user-mode ports (5,000 for each IP address) and TCP TIME-WAIT requirements, you may need to make more ports available on the system. You can set the following registry entry to as high as 0xfffe (65,534):
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\MaxUserPort
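As an illustration of the bandwidth-delay product sizing described above, the following Python sketch (the helper name is hypothetical, not part of any Windows API) computes a window value and rounds it down to a whole number of full TCP packets:

```python
# Hypothetical helper (not part of Windows or this document): computes a
# TcpWindowSize-style value from the bandwidth-delay product, rounded down
# to a whole number of full TCP packets as described above.

def tcp_window_size(bandwidth_bps: int, rtt_seconds: float, mss: int = 1460) -> int:
    """Bandwidth-delay product in bytes, rounded down to a multiple of the MSS."""
    bdp = int(bandwidth_bps / 8 * rtt_seconds)  # bytes outstanding on the path
    return (bdp // mss) * mss                   # round down to full TCP packets

# Example: 100 Mbps link with a 50 ms round-trip time.
print(tcp_window_size(100_000_000, 0.050))  # 624880 bytes
```

Note that any result above 65,535 bytes can only take effect after the RFC 1323 options are enabled, as described above.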
Select an HW RAID type and backup procedure that provide the required performance and data recovery capabilities.
Choose a storage adapter with WHQL certification.
Estimate the Amount of Data to be Stored
When you estimate the amount of data to be stored on a new file server, you need to consider these issues:
The amount of data currently stored on any file servers that will be consolidated onto the new file server.
If the file server will be a replica member, the amount of replicated data that will be stored on the new file server.
The amount of data that you will need to store on the file server in the future.
A general guideline is to plan for faster growth in the future than you experienced in the past.
Investigate whether your organization plans to hire a large number of people, whether any groups in your organization are planning large projects that will require extra storage, and so on.
You must also take into account the amount of space used by operating system files, applications, RAID redundancy, log files, and other factors. Table 2 describes some factors that affect file server capacity.
Table 2. Factors That Affect File Server Capacity
Operating system files: At least 1.5 GB. To allow space for optional components, future service packs, and other items, plan to allow an additional 3 GB to 5 GB for the operating system volume.
Paging file: 1.5 times the amount of RAM by default.
Memory dump file: Depending on the memory dump file option that you have chosen, can be as large as the amount of physical memory plus 1 MB.
Applications: Varies according to the application, which can include antivirus, backup, and disk quota software, database applications, and optional components such as Recovery Console, Services for UNIX, and Services for NetWare.
Log files: Varies according to the application that creates the log file. Some applications allow you to configure a maximum log file size. You must ensure that you have adequate free space to store the log files.
RAID solution: Varies; see Choosing the RAID Level later in this document for more information.
Shadow copies: Ten percent of the volume by default, although increasing this size is recommended.
Storage Array Selection
There are many considerations in choosing a storage array and adapters. The choices include the type of storage array being used, including the following options.
Table 3. Options for Storage Array Selection
Fibre Channel or SCSI
Fibre Channel allows long glass or copper cables to connect the storage array to the system while providing high bandwidth.
SCSI provides very high bandwidth but has cable length restrictions.
HW RAID capabilities
It is important for the storage controllers to offer HW RAID capabilities. RAID levels 0, 1, and 5 are described in Table 4.
Maximum storage capacity
Total storage area.
Bandwidth at which storage can be accessed, which is determined by the number of physical disks in the array, the speed of the controllers, the type of disk (for example, SCSI or Fibre Channel), the HW RAID configuration, and the adapters used to connect the storage array to the system.
HW RAID Levels
Most storage arrays provide some HW RAID capabilities, including the following RAID options.
RAID 0 presents a logical disk that stripes disk accesses over a set of physical disks.
Overall this is the fastest HW RAID configuration.
This is the least expensive RAID configuration, because data is not duplicated.
RAID 0 does not provide the additional data recovery mechanisms that RAID 1 and RAID 5 do.
RAID 1 presents a logical disk that is mirrored to another disk.
RAID 1 is slower than RAID 0 for write operations, because the data needs to be written to two or more physical disks, and the overall latency is that of the slowest of the write operations.
In some cases, RAID 1 can provide faster read operations than RAID 0 because it can read from the least busy physical disk.
RAID 1 is the most expensive in terms of physical disks, because two or more complete copies of the data are stored.
RAID 1 is the fastest in terms of recovery time after a physical disk failure, because the second physical disk is available for immediate use. A new mirror physical disk can be installed while full data access is permitted.
RAID 5 presents a logical disk that has parity information written to other disks, as Figure 3 shows.
RAID 5 uses independent data disks with distributed parity blocks.
RAID 5 is slower than RAID 0, because each logical disk write I/O results in data being written to multiple disks. However, RAID 5 provides additional data recovery capabilities over RAID 0, because data can be reconstructed from the parity.
RAID 5 requires additional time (compared to RAID 1) to recover from a lost physical disk, because the data on the disk needs to be rebuilt from parity information stored on other disks.
RAID 5 is less expensive than RAID 1, because a full copy of the data is not stored on disk.
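A rough sketch of these write-cost differences, counting the physical I/Os generated by one small logical write at each level. The function name is illustrative, and the RAID 5 figure assumes the standard read-modify-write sequence for small writes:

```python
# Illustrative arithmetic (not from any product documentation): physical I/Os
# generated by a single small logical write under each RAID level above.

def physical_ios_per_small_write(raid_level: str, mirrors: int = 2) -> int:
    if raid_level == "raid0":
        return 1                # data lands on a single stripe unit
    if raid_level == "raid1":
        return mirrors          # one write per mirror copy
    if raid_level == "raid5":
        # Read-modify-write: read old data, read old parity,
        # write new data, write new parity.
        return 4
    raise ValueError(f"unknown RAID level: {raid_level}")

for level in ("raid0", "raid1", "raid5"):
    print(level, physical_ios_per_small_write(level))
```

The 4:1 ratio for small RAID 5 writes is why its write performance lags RAID 0 unless requests span a full stripe.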
Other combinations of RAID exist, including RAID 0+1, RAID 10, and RAID 50.
The following figure illustrates RAID 5.
Figure 3. RAID 5 Overview
Choosing the RAID Level
Each RAID level is a trade-off among the following factors:
Availability and reliability
You can determine the best RAID level for your file servers by evaluating the read and write loads of the various data types and then deciding how much you are willing to spend to achieve the performance and availability/reliability that your organization requires. Table 5 describes four common RAID levels, their relative costs, performance, availability and reliability, and their recommended uses.
Table 5. RAID Trade-Offs
RAID 0 (Striped)
Minimum number of disks: 2
Usable storage capacity: 100 percent
Availability and reliability: None. Losing a single disk causes all data on the volume to be lost.
Read performance: Generally improved by increasing concurrency.
Write performance: Generally improved by increasing concurrency.
RAID 1 (Mirrored)
Minimum number of disks: 2
Usable storage capacity: 50 percent
Availability and reliability: Can lose multiple disks as long as a mirrored pair is not lost.
Read performance: Good read performance; improved by increasing concurrency and by dual sources for each request.
Write performance: Worse than JBOD (between 20% and 40% for most workloads).
RAID 5 (Striped with Parity)
Minimum number of disks: 3
Usable storage capacity: (N-1)/N, where N is the number of disks
Availability and reliability: Can tolerate the loss of one disk.
Read performance: Generally improved by increasing concurrency.
Write performance: Poor unless writes are full-stripe (large requests); can be as low as ~25% of JBOD (4:1 requests).
RAID 0+1 (Striped Mirrors)
Minimum number of disks: 4
Usable storage capacity: 50 percent
Availability and reliability: Can lose multiple disks as long as a mirrored pair is not lost. Varies according to the number of mirrored pairs in the array.1
Read performance: Improvement from increasing concurrency and dual sources for each request.
Write performance: Can be better or worse depending on request size, hot spots (static or dynamic), and so on.
1. If a disk fails, failure of its mirrored partner prior to replacement will cause data loss. However, the failure of any other member disk does not cause data loss.
If you use more than two disks, RAID 0+1 is almost always a better solution than RAID 1.
When determining the number of disks that should be included in RAID 0, RAID 5, and RAID 0+1 virtual disks, consider the following information:
Performance increases as you add disks.
Reliability, in terms of mean time to failure (MTTF), decreases as you add disks: data is lost when two disks fail for RAID 5, or when a single disk fails for RAID 0, so each added disk shortens the expected time to data loss.
Usable storage capacity increases as you add disks, but so does cost.
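The scaling in the list above can be sketched numerically. This Python fragment is illustrative only (the function names and the 500,000-hour disk MTTF are assumptions, not vendor figures), and it assumes disks fail independently:

```python
def raid0_array_mttf(disk_mttf_hours: float, disks: int) -> float:
    # For RAID 0, any single disk failure loses the array; assuming
    # independent failures, array MTTF shrinks as disks are added.
    return disk_mttf_hours / disks

def usable_capacity_gb(disk_gb: float, disks: int, raid_level: str) -> float:
    if raid_level == "raid0":
        return disks * disk_gb            # every disk holds data
    if raid_level == "raid5":
        return (disks - 1) * disk_gb      # one disk's worth goes to parity
    if raid_level == "raid0+1":
        return disks * disk_gb / 2        # every stripe unit is mirrored
    raise ValueError(raid_level)

print(raid0_array_mttf(500_000, 8))        # assumed 500,000-hour disks
print(usable_capacity_gb(146, 8, "raid5")) # capacity grows with each disk
```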
Stripe unit size. The software RAID solution is fixed at 64 KB; hardware solutions may range from 4 KB to 1 MB. The ideal stripe unit size maximizes disk activity without unnecessarily breaking up requests (requiring multiple disks to service a single request). For example:
One stream of large sequential requests on JBOD would keep only one disk busy at a time. To keep all disks busy, the stripe unit needs to be 1/N of the request size, where N is the number of disks.
For N streams of small random requests, if N is greater than the number of disks, and if there are no hotspots, striping will not increase performance. However, if there are hotspots, the stripe unit size needs to maximize the chance that a request will not be split, while minimizing the chance of a hotspot falling entirely within one or two stripe units. You might pick a low multiple of the request size, like 5X or 10X, especially if the requests are on some boundary (for example, 4 KB or 8 KB).
For fewer streams than disks, you need to split the streams so that all disks are kept busy. Interpolate from the previous two examples. For example, if you have 10 disks and 5 streams, split each request in half (use a stripe unit size equal to half the request size).
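The three cases above can be sketched as a simple heuristic. This Python fragment illustrates the text's guidance only (the function and the 5x multiple for the random-request case are assumptions, not a product algorithm):

```python
# Illustrative stripe-unit heuristic mirroring the three cases above.

def stripe_unit_size(request_size: int, num_disks: int, num_streams: int) -> int:
    if num_streams == 1:
        # One large sequential stream: split each request across all disks.
        return max(request_size // num_disks, 1)
    if num_streams >= num_disks:
        # Many small random streams: avoid splitting a request; use a low
        # multiple of the request size (5x chosen here as an illustration).
        return request_size * 5
    # Fewer streams than disks: split each request so all disks stay busy.
    return max(request_size * num_streams // num_disks, 1)

# 10 disks, 5 streams of 64 KB requests: stripe unit is half the request size.
print(stripe_unit_size(64 * 1024, 10, 5))  # 32768
```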
Determining the Volume Layout
Whenever possible, use separate volumes for each data type. For example, use one volume for the operating system and paging space, and one or more volumes for shared user data, applications, and log files.
Place different data types in separate volumes on different virtual disks. Using separate virtual disks is especially important for any data types that create heavy write loads, such as log files, so that a single set of disks (that compose the virtual disk) can be dedicated to handling the disk I/O created by the updates to the log files. Placing the paging file on a separate virtual disk can provide some minor improvements in performance, but typically not enough to make it worth the extra cost.
To gain some performance benefits while minimizing cost, it is often useful to combine different data types in one or more volumes on the same virtual disks. A common method is to store the operating system and paging space on one virtual disk and the user data, applications, and log files in one or more volumes on the remaining virtual disk.
Some storage adapters are capable of moderating how frequently they interrupt the host processors to indicate disk activity (or its completion). Moderating interrupts can often reduce CPU load on the host, but unless interrupt moderation is performed intelligently, the CPU savings may come at the cost of increased latency.
Table 6. Options for Interrupt Moderation
Adapters that are 64-bit-capable can perform DMA operations to and from high physical memory locations (above 4 GB).
Copper and fiber (glass) adapters
Copper adapters generally have the same performance as their fiber counterparts, and both copper and fiber are available on some Fibre Channel adapters. Some environments are better suited to copper adapters and others to glass.
Dual or quad port SCSI adapters
Some SCSI adapters provide two or four SCSI buses on a single adapter card. This is often necessary because of SCSI limits on the number of disks that can be connected to a single bus. Fibre Channel disks generally do not have limits on the number of disks connected to an adapter.
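As a small illustration of those bus limits, the following sketch computes how many SCSI buses a disk set needs. The 15-device-per-bus limit is an assumption (typical for a wide SCSI bus) and should be checked against the adapter's documentation:

```python
import math

# Illustrative only: devices_per_bus is an assumed limit (15 is typical for
# wide SCSI); consult the adapter's documentation for the real figure.
def scsi_buses_needed(num_disks: int, devices_per_bus: int = 15) -> int:
    return math.ceil(num_disks / devices_per_bus)

print(scsi_buses_needed(40))  # 40 disks need 3 buses at 15 devices each
```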