Clusters of computer systems have been built and used for over a decade [1]. Pfister [2] defines a cluster as “a parallel or distributed system that consists of a collection of interconnected whole computers, that is utilized as a single, unified computing resource”. In general, the goal of a cluster is to make it possible to share a computing load over several systems without either the users or system administrators needing to know that more than one system is involved.
If any component in the system, hardware or software, fails, the user may see degraded performance but will not lose access to the service. Ideally, if more processing power is needed the user simply “plugs in a new component”, and presto, the performance of the system as a whole improves. Windows NT Clusters are, in general, shared nothing clusters. This means that while several systems in the cluster may have access to a device or resource, it is effectively owned and managed by a single system at a time. Currently the SCSI bus with multiple initiators is used as the storage connection. Fiber Channel will be supported in the near future.
To appreciate our design approach it is helpful to understand the business goals for the development team. The principal goal was to develop a product that addresses a very broad, high volume market, rather than specific market segments. Marketing studies showed a huge demand for higher availability in small businesses as databases and electronic mail have become essential to their daily operation. These businesses cannot afford specialized computer operations staff, so ease of installation and management were defined as key product advantages. Making it easy to provide improved availability for existing applications, as well as providing tools to enhance other applications to take advantage of the cluster features, were also requirements. On the other hand, we see Windows NT moving into larger, higher performance systems, so the base operating system needed to be extended to provide a foundation for building large, scalable clusters over several years.
Starting with these goals, we formed a plan to develop clusters in phases over several years. The first phase, now completed, installed the underpinnings into the base operating system, and built the foundation of the cluster components, providing enhanced availability to key applications using storage accessible by two nodes. Later phases will allow for much larger clusters, true distributed applications, higher performance interconnects, distributed storage, load balancing, etc. From a technical viewpoint, we needed to provide a way to know which systems were operating as a part of a cluster, what applications were running, and the current state of health of those applications. This information needed to be available on all nodes of the cluster so that if a single node failed we would know what it was doing, and what should be done about it. Advertising and locating services in the cluster is an especially complex issue. We also needed to develop tools to easily install and administer the cluster as a whole.
Before delving into the design details we first introduce some concepts and terms used throughout the paper. Members of a cluster are referred to as nodes or systems and the terms are used interchangeably. The Cluster Service is the collection of software on each node that manages all cluster specific activity. A Resource is the canonical item managed by the Cluster Service, which sees all resources as identical opaque objects. Resources may include physical hardware devices such as disk drives and network cards, or logical items such as logical disk volumes, TCP/IP addresses, entire applications, and databases. A resource is said to be on-line on a node when it is providing its service on that specific node. A group is a collection of resources to be managed as a single unit. Usually a group contains all of the elements needed to run a specific application and to let client systems connect to the service provided by the application. Groups allow an administrator to combine resources into larger logical units and manage them as a unit. Operations performed on a group affect all resources contained within that group.
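To make these definitions concrete, the following C sketch models the relationship between resources and groups. All type and field names here are illustrative assumptions, not the Cluster Service's actual internal data structures.

    #include <stddef.h>

    /* Hypothetical model of the cluster objects described above. */

    typedef enum { RES_OFFLINE, RES_ONLINE, RES_FAILED } ResourceState;

    typedef struct Resource {
        const char       *name;        /* e.g. "SQL IP Address"                 */
        ResourceState     state;       /* on-line means providing its service   */
        struct Resource **depends_on;  /* resources that must be on-line first  */
        size_t            dep_count;
    } Resource;

    typedef struct Group {
        const char *name;              /* e.g. "SQL Server Group"               */
        Resource  **members;           /* everything one application needs      */
        size_t      member_count;
        int         owner_node;        /* a group is owned by one node at a time */
    } Group;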
One key goal of the project was to make the cluster service a separate, isolated set of components. This reduces the possibility of introducing problems into the existing code base and avoids complex schedule dependencies. We did, however, need to make changes in a few areas to enable the cluster features. The ability to dynamically create and delete network names and addresses was added to the networking code. The file system was modified to add a dismount capability that closes open files. The I/O subsystem needed to deal with disks and volume sets being shared between multiple nodes. Apart from these, and a few other minor modifications, cluster capabilities were built on top of the existing operating system features.
Figure 1 shows an overview of the components and their relationships in a single system of a Windows NT cluster.
The Cluster Service controls all aspects of cluster operation on a cluster system. It is implemented as a Windows NT service and consists of six closely related, cooperating components:
- The Node Manager handles cluster membership and watches the health of the other cluster systems.
- The Configuration Database Manager maintains the cluster configuration database.
- The Resource Manager/Failover Manager makes all resource/group management decisions and initiates appropriate actions, such as startup, restart, and failover.
- The Event Processor connects all of the components of the Cluster Service, handles common operations, and controls Cluster Service initialization.
- The Communications Manager manages communications with all other nodes of the cluster.
- The Global Update Manager provides a global update service that is used by other components within the Cluster Service.
- The Resource Monitor, strictly speaking, is not part of the Cluster Service. It runs in a separate process, or processes, and communicates with the Cluster Service via Remote Procedure Calls (RPC). It monitors the health of each resource via callbacks to the resources. It also provides the polymorphic interface between generic calls, like online, and the specific online operation for that resource (a sketch of this dispatch follows the list).
- The time service maintains consistent time within the cluster but is implemented as a resource rather than as part of the Cluster Service itself.
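The polymorphic interface provided by the Resource Monitor can be pictured as a per-resource-type table of function pointers, so that a generic request such as online is routed to type-specific code. The C sketch below is only an illustration of that dispatch; the names and signatures are assumptions, not the Resource Monitor's real internals.

    #include <stdio.h>

    /* Hypothetical per-resource-type operation table used by a monitor
     * to turn generic requests into type-specific ones. */
    typedef struct {
        const char *type_name;
        int (*online)(void *ctx);       /* bring the resource on-line   */
        int (*offline)(void *ctx);      /* take the resource off-line   */
        int (*looks_alive)(void *ctx);  /* cheap, frequent health check */
    } ResourceOps;

    static int disk_online(void *ctx)      { (void)ctx; puts("disk: online");  return 0; }
    static int disk_offline(void *ctx)     { (void)ctx; puts("disk: offline"); return 0; }
    static int disk_looks_alive(void *ctx) { (void)ctx; return 1; }

    static const ResourceOps physical_disk_ops = {
        "Physical Disk", disk_online, disk_offline, disk_looks_alive
    };

    /* The generic call the monitor makes, independent of resource type. */
    static int bring_online(const ResourceOps *ops, void *ctx)
    {
        return ops->online(ctx);
    }

    int main(void)
    {
        return bring_online(&physical_disk_ops, NULL);
    }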
We need to elaborate on resources and their properties before discussing each of the above components in detail.
Resources
Resources are implemented as Dynamically Linked Libraries (DLLs) loaded into the Resource Monitor’s address space. Resources run in the system account and are considered privileged code. Resources can be defined to run in separate processes, created by the Cluster Service when creating resources.
Resources expose a few simple interfaces and properties to the Cluster Service. Resources may depend on other resources. A resource is brought on-line only after the resources it depends on are already on-line, and it is taken off-line before the resources it depends on. We prevent circular dependencies from being introduced. Each resource has an associated list of systems in the cluster on which this resource may execute. For example, a disk resource may only be hosted on systems that are physically connected to the disk. Also associated with each resource is a local restart policy, defining the desired action in the event that the resource cannot continue on the current node. All Microsoft-provided resources run in a single process, while other resources will run in at least one other process.
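The on-line ordering implied by dependencies amounts to a depth-first walk of the dependency graph, and the same walk can reject a circular dependency. The following is a minimal C sketch of that idea under assumed, simplified data structures; it is not the Resource Manager's actual algorithm.

    #include <stdio.h>
    #include <stddef.h>

    /* Minimal sketch: bring resources on-line in dependency order and
     * refuse cycles.  Structures and names are illustrative assumptions. */

    #define MAX_DEPS 4

    typedef struct Res {
        const char *name;
        struct Res *deps[MAX_DEPS];   /* must be on-line before this one */
        size_t      ndeps;
        int         online;
        int         visiting;         /* cycle-detection mark */
    } Res;

    static int bring_online(Res *r)
    {
        if (r->online)   return 0;
        if (r->visiting) { printf("circular dependency at %s\n", r->name); return -1; }
        r->visiting = 1;
        for (size_t i = 0; i < r->ndeps; i++)
            if (bring_online(r->deps[i]) != 0) return -1;
        r->visiting = 0;
        r->online = 1;
        printf("%s is now on-line\n", r->name);
        return 0;
    }

    int main(void)
    {
        Res disk = { "Physical Disk", {0},             0, 0, 0 };
        Res ip   = { "IP Address",    {0},             0, 0, 0 };
        Res name = { "Network Name",  { &ip },         1, 0, 0 };
        Res app  = { "File Share",    { &disk, &name }, 2, 0, 0 };
        return bring_online(&app) ? 1 : 0;
    }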
The base product provides resource DLLs for the following resources:
- Physical disk
- Logical volume (consisting of one or more physical disks)
- File and print shares
- Network addresses and names
- Generic service or application
- Internet Server service
Resource Interface
The resource DLL interfaces are formally specified and published as part of the Cluster software development kit (SDK). This interface allows application developers to expose their applications as cluster resources. For example, a database server application could provide a database resource to enable the Cluster Service to fail over an individual database from the server on one system to the server on the other. Without a database resource, the Cluster Service can only fail over the entire server application (and all its databases). Since a resource can be active on only one system in the cluster, this would limit the cluster to a single running instance of the database server. Providing a database resource makes the database the basic failover unit instead of the server program itself. Once the server application is no longer the resource, multiple servers can run simultaneously on different systems in the cluster, each with its own set of databases. This is the first step towards achieving cluster-wide scalability. Providing a resource DLL is a requirement for any cluster-aware application.
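As an illustration of what such a resource DLL looks like, the skeleton below sketches a hypothetical “MyDatabase” resource. The entry-point names and signatures are assumptions made for the example; the published SDK defines the real interface and calling conventions.

    /* Skeleton of a resource DLL for a hypothetical "MyDatabase" resource.
     * Entry-point names and signatures here are illustrative assumptions;
     * the published Cluster SDK defines the real interface. */

    #include <stdio.h>

    typedef void *RES_HANDLE;

    /* Called when the Resource Monitor creates an instance of the resource. */
    RES_HANDLE MyDbOpen(const char *resource_name)
    {
        printf("open %s\n", resource_name);
        return (RES_HANDLE)1;     /* opaque per-instance context */
    }

    /* Bring one database on-line on the local node. */
    int MyDbOnline(RES_HANDLE h)
    {
        (void)h;
        /* start or attach to the database instance here */
        return 0;
    }

    /* Take the database off-line so its group can move to the other node. */
    int MyDbOffline(RES_HANDLE h)
    {
        (void)h;
        /* detach from or stop the database instance here */
        return 0;
    }

    /* Lightweight health check polled by the Resource Monitor. */
    int MyDbLooksAlive(RES_HANDLE h)
    {
        (void)h;
        return 1;                 /* nonzero: the database still appears healthy */
    }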
Resources within groups
A Group can be ‘owned’ by only one system at a time. Groups can be failed over or moved from one system to another as atomic units. Individual resources within a Group must be present on the system that currently ‘owns’ the Group. Therefore, at any given instant, different resources within the same group cannot be owned by different systems across the cluster. Each group has an associated cluster-wide policy that specifies which system the group prefers to run on, and which system the group should move to in case of a failure. In the first release, every Group has its own network service name and address, used by clients to bind to services provided by resources within the Group. Future releases will use a dynamic directory service to eliminate the requirement for a network service name per Group.
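The co-location rule can be summarized in a few lines of C: bringing a group on-line means bringing every resource it contains on-line on the single node that owns it. The structure and names below are assumptions used only for illustration.

    #include <stdio.h>

    /* Sketch of the co-location rule: every resource in a group comes
     * on-line on the single node that owns the group. */

    #define MAX_RES 4

    typedef struct {
        const char *name;
        const char *resources[MAX_RES];
        int         nres;
        int         owner_node;   /* exactly one owner at a time */
    } Group;

    static void bring_group_online(Group *g, int node)
    {
        g->owner_node = node;
        for (int i = 0; i < g->nres; i++)
            printf("node %d: bringing %s on-line\n", node, g->resources[i]);
    }

    int main(void)
    {
        Group g = { "Mail Group",
                    { "Physical Disk", "IP Address", "Network Name", "Mail Service" },
                    4, -1 };
        bring_group_online(&g, 0);   /* all four resources land on node 0 */
        return 0;
    }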
Cluster Service States
From the point of view of other systems in the cluster and of the management interfaces, nodes in the cluster may be in one of three distinct states. These states are visible to the other systems in the cluster; they are really the state of the Cluster Service and are managed by the Event Processor.
Offline - The system is not a fully active member of the cluster. The system and its Cluster Service may or may not be running.
Online - The system is a fully active member of the cluster. It honors cluster database updates, contributes votes to the quorum algorithm, maintains heartbeats, and can own and run Groups.
Paused - The system is a fully active member of the cluster. It honors cluster database updates, contributes votes to the quorum algorithm, and maintains heartbeats, but it cannot own or run Groups. The Paused state is provided to allow certain maintenance to be performed. Online and Paused are treated as equivalent states by most of the cluster software.
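These rules can be captured compactly in code. The sketch below (with assumed names) records the distinction that Paused behaves like Online for membership purposes but cannot host Groups.

    #include <stdbool.h>
    #include <stdio.h>

    /* The three externally visible node states described above. */
    typedef enum { NODE_OFFLINE, NODE_ONLINE, NODE_PAUSED } NodeState;

    /* A node participates in database updates, quorum votes and
     * heartbeats whenever it is not Offline ...                  */
    static bool is_active_member(NodeState s)
    {
        return s == NODE_ONLINE || s == NODE_PAUSED;
    }

    /* ... but only an Online node may own and run Groups. */
    static bool can_host_groups(NodeState s)
    {
        return s == NODE_ONLINE;
    }

    int main(void)
    {
        NodeState s = NODE_PAUSED;
        printf("active=%d can_host_groups=%d\n",
               is_active_member(s), can_host_groups(s));
        return 0;
    }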
Node Manager
The Node Manager maintains cluster membership and sends periodic messages, called heartbeats, to its counterparts on the other systems of the cluster to detect system failures. It is essential that all systems in the cluster always have exactly the same view of cluster membership. In the event that one system detects a communication failure with another cluster node, it broadcasts a message to the entire cluster, causing all members to verify their view of the current cluster membership. This is called a regroup event. Writes to potentially shared devices must be frozen until the membership has stabilized. If the Node Manager on a system does not respond, that system is removed from the cluster and its active Groups must be failed over (“pulled”) to an active system. Note that the failure of a Cluster Service also causes all of its locally managed resources to fail.
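A heavily simplified sketch of the heartbeat idea follows: each node counts consecutive missed heartbeats from a peer and, past a threshold, triggers a regroup so that every member re-verifies membership. The threshold and names are assumptions; the actual regroup protocol is considerably more involved.

    #include <stdio.h>

    #define MAX_NODES    2
    #define MISSED_LIMIT 3          /* illustrative threshold, not real tuning */

    static int missed[MAX_NODES];   /* consecutive heartbeats missed per peer  */

    /* Called once per heartbeat period for each peer node. */
    static void heartbeat_tick(int node, int heartbeat_received)
    {
        if (heartbeat_received) {
            missed[node] = 0;
            return;
        }
        if (++missed[node] == MISSED_LIMIT) {
            /* Suspected communication failure: trigger a regroup event so
             * every member re-verifies its view of cluster membership.
             * Writes to shared devices would be frozen until this settles. */
            printf("node %d unresponsive: broadcasting regroup\n", node);
        }
    }

    int main(void)
    {
        for (int tick = 0; tick < 4; tick++)
            heartbeat_tick(1, /*heartbeat_received=*/0);
        return 0;
    }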
Configuration Database Manager
The Configuration Database Manager implements the functions needed to maintain the Cluster Configuration Database. This database contains information about all of the physical and logical entities in a cluster. These entities are the Cluster itself, Systems, Resource Types, Groups, and Resources. Persistent and volatile information is used to track the current and desired state of the cluster. The Database Managers on each of the cluster systems cooperate to maintain configuration information consistently across the cluster. One-phase commits are used to ensure the consistency of the cluster database on all nodes. The Configuration Database Manager also provides an interface to the Configuration Database for use by the other Cluster Service components. This interface is similar to the registry interface exposed by the Win32 API set [3], with the key difference being that changes made on one node of the cluster are atomically distributed to all nodes in the cluster that are affected.
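The key difference from an ordinary registry write can be sketched as follows: a cluster-wide update is applied to every node's copy of the database, not just the caller's. The functions below are stand-ins chosen for the example, not the Database Manager's real interface.

    #include <stdio.h>
    #include <string.h>

    #define MAX_NODES 2

    static char db_value[MAX_NODES][64];   /* one copy of a key per node */

    static void set_value_local(int node, const char *value)
    {
        strncpy(db_value[node], value, sizeof db_value[node] - 1);
    }

    /* A cluster-wide update: unlike a plain Win32 registry write, the new
     * value reaches every affected node, not just the caller's. */
    static void cluster_set_value(const char *value)
    {
        for (int node = 0; node < MAX_NODES; node++)
            set_value_local(node, value);   /* delivered via cluster messages */
    }

    int main(void)
    {
        cluster_set_value("Groups\\SQL\\PreferredOwner = NodeB");
        printf("node0=%s\nnode1=%s\n", db_value[0], db_value[1]);
        return 0;
    }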
Resource Manager/Failover Manager
The Resource Manager/Failover Manager is responsible for stopping and starting resources, managing resource dependencies, and initiating failover of Groups. It receives resource and system state information from the Resource Monitor and the Node Manager, and it uses this information to make decisions about Groups. The Failover Manager is responsible for deciding which systems in the cluster should ‘own’ which Groups. When Group arbitration finishes, the systems that ‘own’ individual Groups turn control of the resources within the Group over to their respective Resource Managers. When failures of resources within a Group cannot be handled by the ‘owning’ system, the Failover Managers re-arbitrate for ownership of the Group.
Pushing a Group
If a resource fails, the Resource Manager may choose to restart the resource, or to take the resource offline along with its dependent resources. If it takes the resource offline, it will indicate to the Failover Manager that the Group should be restarted on another system in the cluster. This is called pushing a Group to another system. A cluster administrator may also manually initiate such a Group transfer. The algorithm for both situations is identical, except that resources are gracefully shut down for a manually initiated failover, while they are forcefully shut down in the failure case.
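The decision described above, restart locally until a restart budget is exhausted and then push, can be sketched as follows. The restart count, names, and shutdown flag are illustrative assumptions rather than the product's actual policy engine.

    #include <stdio.h>

    typedef enum { SHUTDOWN_GRACEFUL, SHUTDOWN_FORCED } ShutdownKind;

    static int restarts_left = 2;       /* per-resource local restart policy */

    static void push_group(ShutdownKind kind)
    {
        /* Administrator-initiated moves shut resources down gracefully;
         * failure-initiated moves shut them down forcefully.            */
        printf("%s shutdown, then restart the group on the other node\n",
               kind == SHUTDOWN_GRACEFUL ? "graceful" : "forced");
    }

    static void on_resource_failure(void)
    {
        if (restarts_left-- > 0) {
            printf("restarting resource locally\n");
            return;
        }
        push_group(SHUTDOWN_FORCED);    /* give up locally: push the group */
    }

    int main(void)
    {
        on_resource_failure();          /* local restart  */
        on_resource_failure();          /* local restart  */
        on_resource_failure();          /* push the group */
        return 0;
    }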
Pulling a Group
When an entire system in the cluster fails, its Groups must be pulled from that system to another system. This process is similar to pushing a Group, but without the shutdown phase on the failed system. The complication here is determining what Groups were running on the failed system and which system should take ownership of the various Groups. All systems capable of hosting the Groups negotiate among themselves for ownership. This negotiation is based on system capabilities, current load, application feedback, or the Group’s “system preference list”. Once negotiation of the Group is complete, all members of the cluster update their databases and thus keep track of which system owns the Groups.
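One way to picture the pull is that every surviving node runs the same deterministic choice over the failed node's Groups and records the outcome in its copy of the cluster database. In this sketch the negotiation is reduced to a preference-list lookup; the real decision also weighs capabilities, load, and application feedback.

    #include <stdio.h>

    #define NNODES 2

    typedef struct {
        const char *name;
        int owner;                 /* as recorded in the cluster database */
        int preference[NNODES];    /* ordered "system preference list"    */
    } Group;

    static int pick_survivor(const Group *g, const int *node_up)
    {
        for (int i = 0; i < NNODES; i++)
            if (node_up[g->preference[i]])
                return g->preference[i];
        return -1;                 /* no eligible host remains */
    }

    int main(void)
    {
        int node_up[NNODES] = { 1, 0 };              /* node 1 has failed   */
        Group g = { "Print Group", 1, { 1, 0 } };    /* it owned this group */

        int new_owner = pick_survivor(&g, node_up);
        if (new_owner >= 0) {
            g.owner = new_owner;                     /* database update     */
            printf("%s pulled to node %d\n", g.name, g.owner);
        }
        return 0;
    }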
Failback
When a system comes back on-line, the Failover Manager can decide to move some Groups back to that system. We refer to this action as failback. A Group must have a preferred owner defined in order to fail back. Groups for which the new system is the “preferred” owner will be pushed from the current owner to the new system. Protection, in the form of a timing window, is included to defend against the case where a system continually crashes as soon as it tries to run an important application.
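One plausible reading of such a timing window is sketched below: a Group is only failed back once its preferred owner has stayed up for some minimum period. The threshold and the exact rule are assumptions for illustration, not the product's actual policy.

    #include <stdio.h>
    #include <time.h>

    #define MIN_STABLE_SECONDS (10 * 60)   /* assumed window, not a real default */

    static int should_failback(time_t node_online_since, time_t now,
                               int preferred_owner_defined)
    {
        if (!preferred_owner_defined)
            return 0;                      /* nothing to fail back to */
        return difftime(now, node_online_since) >= MIN_STABLE_SECONDS;
    }

    int main(void)
    {
        time_t now = time(NULL);
        printf("fail back after 30 s up?   %s\n",
               should_failback(now - 30, now, 1) ? "yes" : "no");
        printf("fail back after 1 hour up? %s\n",
               should_failback(now - 3600, now, 1) ? "yes" : "no");
        return 0;
    }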
Event Processor
The Event Processor is the electronic switchboard that propagates events to and from applications and other components within the Cluster Service. The Event Processor also performs miscellaneous services such as delivering signal events to cluster-aware applications and maintaining cluster objects.
The Event Processor is responsible for initializing the Cluster Service. It is the initial entry point for the Cluster Service. After initialization is complete the external state of the system is Offline. The Event Processor will then call the Node Manager to begin the process of joining or forming a cluster.
Time Services
All systems in the cluster must maintain a consistent view of time. A special resource implements the Time Service. The node on which this resource is on-line is called the Time Source. There is always a Time Source in the cluster. The goal is to ensure that the cluster members share a consistent view of the time, rather than a strictly accurate one. The administrator can influence the decision and cause a particular system, or systems, to be used as the Time Source.
Cluster Service Communications
The Cluster Services in each node of a cluster are in constant communication with each other. Communication in small clusters is fully connected; that is, all nodes are in direct communication with all other nodes. Intra-cluster communication uses RPC mechanisms to guarantee reliable, exactly-once delivery of messages.
Application Interface
The Cluster Service exports an interface for administration of cluster resources, systems, and the cluster itself. Applications and administration tools, such as the Cluster Administrator, can call these interfaces using remote procedure calls whether they are running in the cluster or on an external system. The administration interface is broken down into several categories, each associated with a particular cluster component: systems, resources, and the cluster itself.
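The shape of that interface, handle-based and callable locally or remotely, is sketched below. None of these names are the published API; they only mirror the categories described above (the cluster, its Groups, and the operations on them).

    #include <stdio.h>

    /* Hypothetical shape of an administration client.  In the real
     * product these calls travel over RPC into the Cluster Service,
     * whether the caller runs inside or outside the cluster. */

    typedef struct { const char *name; } ClusterHandle;
    typedef struct { const char *name; } GroupHandle;

    static ClusterHandle open_cluster(const char *name)
    {
        ClusterHandle c = { name };
        return c;
    }

    static GroupHandle open_group(ClusterHandle *c, const char *group)
    {
        (void)c;
        GroupHandle g = { group };
        return g;
    }

    static void move_group(GroupHandle *g, const char *target_node)
    {
        /* The Cluster Service would drive a graceful "push" of the group. */
        printf("moving group %s to %s\n", g->name, target_node);
    }

    int main(void)
    {
        ClusterHandle c = open_cluster("MYCLUSTER");
        GroupHandle   g = open_group(&c, "SQL Group");
        move_group(&g, "NODE2");
        return 0;
    }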
Creating a Cluster
When a system administrator wishes to create a new cluster, she will run a cluster installation utility on the system that is to become the first member of the cluster. For a new cluster, the database is created and the initial cluster member is added. The administrator will then configure any devices that are to be managed by the cluster software. We now have a cluster with a single member. The next step is to run the installation procedure on each of the other members of the cluster. The only difference is that the name of the existing cluster must be entered, and the new node will automatically receive a copy of the existing cluster database.
Joining a Cluster
Following a restart of a system, the cluster service is started automatically. The system configures and mounts local, non-shared devices. Cluster-wide devices must be left offline while booting because another node may be using them. The system uses a ‘discovery’ process to find the other members of the cluster. When the system discovers any member of the cluster, it performs an authentication sequence. The existing cluster member authenticates the newcomer and returns a status of success if everything checks out. If the node is not recognized as a member, or the password is wrong, the request to join is refused. The database on the arriving node is then checked and, if it is out of date, an updated copy is sent. The joining system can now use this shared database to find shared resources and to bring them online as needed.
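The join sequence can be summarized as: discover a member, authenticate, then catch the local database up if it is stale. In the sketch below every function is a stand-in; the actual discovery and authentication mechanisms are not spelled out here.

    #include <stdio.h>

    static int discover_member(void)              { return 1; }   /* found one */
    static int authenticate(const char *password) { return password[0] != '\0'; }

    static int local_db_sequence   = 40;   /* assumed version stamps */
    static int cluster_db_sequence = 42;

    static int join_cluster(const char *password)
    {
        if (!discover_member())
            return -1;                     /* no cluster found: try to form one */
        if (!authenticate(password)) {
            printf("join refused\n");
            return -1;
        }
        if (local_db_sequence < cluster_db_sequence) {
            printf("receiving updated configuration database\n");
            local_db_sequence = cluster_db_sequence;
        }
        printf("joined; shared resources may now be brought on-line as needed\n");
        return 0;
    }

    int main(void)
    {
        return join_cluster("secret") ? 1 : 0;
    }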
Forming a Cluster
If a cluster is not found during the discovery process, a system will attempt to form its own cluster. To form a cluster, the system must gain access to a quorum resource. The quorum resource is used as a tie-breaker when booting a cluster and also to protect against both nodes forming their own cluster if communication fails in a two node cluster. The quorum resource is a special resource, often, but not necessarily, a disk, that a node must arbitrate for and have possession of before it can form a cluster.
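The forming decision reduces to: join if an existing cluster was discovered, otherwise form a cluster only after winning arbitration for the quorum resource. In the sketch below the arbitration itself is a stand-in for the real reservation protocol, which is not described here.

    #include <stdio.h>

    static int arbitrate_for_quorum(void)
    {
        /* e.g. try to reserve the quorum disk; exactly one node can win */
        return 1;
    }

    static void boot_path(int found_existing_cluster)
    {
        if (found_existing_cluster) {
            printf("joining the existing cluster\n");
        } else if (arbitrate_for_quorum()) {
            printf("forming a new cluster\n");
        } else {
            /* The other node holds the quorum resource: do not form a
             * second cluster (this is the split-brain protection). */
            printf("waiting: another node owns the quorum resource\n");
        }
    }

    int main(void)
    {
        boot_path(0);
        return 0;
    }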
Leaving a Cluster
When leaving a cluster, a cluster member sends a ClusterExit message to all other members of the cluster, notifying them of its intent to leave the cluster. The exiting cluster member does not wait for any responses and immediately proceeds to shut down all resources and close all connections managed by the cluster software.
Sending a message to the other systems in the cluster when leaving saves the other systems from discovering the absence and having to go through a regroup effort to reestablish cluster membership.
Summary
At the time of writing, we have a fully functional implementation of the first phase in beta test. We have modified the base operating system, and built a foundation on which to add features over the next several years. We have attempted to follow the principles of a “shared nothing” cluster model throughout the design. In the process we have gained a tremendous amount of insight into the problems of loosely coupled distributed systems that will be applied to future releases of our product.
Future Directions
Our first product is clearly missing some key features needed to reach the top-level goal of “just plugging in another system” when increased performance is needed. Our base design supports larger clusters, but the test and verification effort made it necessary to constrain the initial release to two nodes. We will use a hierarchical approach to clusters with more than N nodes, where N will depend on the communications overhead. Finding and binding to services easily and transparently will be solved using a two-tiered approach consisting of a global directory service coupled with a highly dynamic, cluster-specific service. Coarse-grain load-balancing software will provide the directory with the names of the systems to which incoming clients can connect. Truly distributed applications will be supported by exposing a set of cluster-wide communication and transaction services. Changes to the NT networking architecture are in design to enable low-latency, high-bandwidth cluster interconnects such as Tandem’s ServerNet [4] and Digital’s Memory Channel [5].
These interconnects are capable of transferring data directly to and from the application data buffers and eliminate the need for and overhead associated with traditional network stacks. Very high performance shared nothing clusters require some form of I/O shipping, where I/O requests are sent by the cluster communication service to the system physically hosting the disk drive.
Acknowledgements
Too many people have contributed to the Cluster project to thank here. However, we would be remiss not to mention the help of a few key individuals. Dave Cutler convinced us to start and join the project and helped set the direction for the initial design. Alan Rowe and others from Tandem Computer provided the regroup algorithm and invaluable experience in the area of fault tolerance and high availability. David Potter developed the administration tools that make the cluster so easy to use.
References
[1] N. Kronenberg, H. Levy, and W. Strecker, “VAXclusters: A Closely Coupled Distributed System,” ACM Transactions on Computer Systems, vol. 4, no. 2 (May 1986).
[2] G. Pfister, In Search of Clusters (Prentice-Hall, Inc., 1995), page 72.
[3] Win32 Programmer’s Reference, Microsoft Press, Redmond, Washington.
[4] Tandem ServerNet, http://www.tandem.com/MENU_PGS/SNET_PGS/TECHINFO.HTM
[5] R. Gillett, M. Collins, and D. Pimm, “Overview of Memory Channel Network for PCI,” Digest of Papers, COMPCON ’96, pp. 244-249.