Instruction Manual
Table Of Contents
- HP ProLiant SB460c SAN Gateway Storage Server
- Table of Contents
- About this guide
- 1 Storage management overview
- 2 File server management
- File services features in Windows Storage Server 2003 R2
- File services management
- Volume shadow copies
- Folder and share management
- File Server Resource Manager
- Other Windows disk and data management tools
- Additional information and references for file services
- 3 Print services
- 4 Microsoft Services for Network File System (MSNFS)
- MSNFS Features
- MSNFS use scenarios
- MSNFS components
- Administering MSNFS
- Server for NFS
- User Name Mapping
- Microsoft Services for NFS troubleshooting
- Microsoft Services for NFS command-line tools
- Optimizing Server for NFS performance
- Print services for UNIX
- 5 Other network file and print services
- 6 Enterprise storage servers
- 7 Cluster administration
- Cluster overview
- Cluster terms and components
- Cluster concepts
- Cluster planning
- Preparing for cluster installation
- Cluster installation
- Configuring cluster service software
- Cluster groups and resources, including file shares
- Print services in a cluster
- Advanced cluster administration procedures
- Additional information and references for cluster services
- 8 Troubleshooting, servicing, and maintenance
- 9 System recovery
- A Regulatory compliance and safety
- Index

Advanced cluster administration procedures
Failing over and failing back
As previously mentioned, when a node goes offline, all resources dependent on that node are
automatically failed over to another node. Processing continues, but in a reduced capacity, because
all operations must be processed on the remaining node(s). In clusters containing more than two
nodes, additional failover rules can be applied. For instance, groups can be configured to fail over
to different nodes to balance the additional workload imposed by the failed node. Nodes can be
excluded from the possible owners list to prevent a resource from coming online on a particular node.
Lastly, the preferred owners list can be ordered to provide an ordered list of failover nodes. Using
these tools, the failover of resources within a multinode cluster can be controlled to provide a
balanced failover methodology that distributes the increased workload.
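The interaction of the two lists described above can be sketched as follows. This is a conceptual illustration only, not the cluster service's actual implementation; the function and node names are hypothetical.

```python
def pick_failover_node(online_nodes, possible_owners, preferred_owners):
    """Illustrate how a failover target might be chosen for a group.

    Candidates are the online nodes that appear in the possible owners
    list (exclusion from that list keeps a resource off a node). The
    ordered preferred owners list then decides among candidates: the
    first preferred owner that is available wins.
    """
    candidates = [node for node in online_nodes if node in possible_owners]
    for node in preferred_owners:
        if node in candidates:
            return node
    # No preferred owner is available; fall back to any remaining candidate.
    return candidates[0] if candidates else None


# Hypothetical four-node cluster in which Node2 has just gone offline.
online = ["Node1", "Node3", "Node4"]
target = pick_failover_node(
    online,
    possible_owners=["Node1", "Node2", "Node3"],   # Node4 excluded
    preferred_owners=["Node2", "Node3", "Node1"],  # ordered preference
)
print(target)
```

Here Node4 never receives the group (it is not a possible owner), and among the surviving candidates the ordered preference list selects Node3, spreading the failed node's workload according to policy.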
Because operating environments differ, the administrator must indicate whether the system will
automatically fail the resources (organized by resource groups) back to their original node or will
leave the resources failed over, waiting for the resources to be moved back manually.
NOTE:
If the storage server is not set to automatically fail back the resources to their designated owner, the resources
must be moved back manually each time a failover occurs.
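The failback decision the administrator configures can be summarized in a short sketch. Again, this is an illustrative model under assumed names, not the cluster service's internal logic.

```python
def on_node_recovered(group, failback_enabled):
    """Model where a failed-over group runs once its designated owner returns.

    With automatic failback enabled, the group moves back to its
    designated (preferred) owner as soon as that node recovers;
    otherwise it stays on the failover node until an administrator
    moves it back manually.
    """
    if failback_enabled:
        group["current_node"] = group["preferred_node"]
    return group["current_node"]


# Hypothetical group that failed over from Node1 to Node2.
group = {"name": "FileShares", "preferred_node": "Node1", "current_node": "Node2"}
print(on_node_recovered(group, failback_enabled=False))  # remains on Node2
print(on_node_recovered(group, failback_enabled=True))   # returns to Node1
```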
Restarting one cluster node
CAUTION:
Restarting a cluster node should be done only after confirming that the other node(s) in the cluster are
functioning normally. Adequate warning should be given to users connected to resources of the node being
restarted. Attached connections can be viewed through the Management Console on the storage server
Desktop using Terminal Services. From the Management Console, select
File Sharing > Shared Folders > Sessions.
The physical process of restarting one of the nodes of a cluster is the same as restarting a storage
server in a single-node environment. However, additional caution is needed.
Restarting a cluster node causes all cluster resources served by that node to fail over to the other nodes
in the cluster based on the failover policy in place. Until the failover process completes, any currently
executing read and write operations will fail. Other node(s) in the cluster will be placed under a
heavier load by the extra work until the restarted node comes up and the resources are moved back.
Shutting down one cluster node
CAUTION:
Shutting down a cluster node must be done only after confirming that the other node(s) in the cluster are
functioning normally. Adequate warning should be given to users connected to resources of the node being
shut down.