PowerEdge Cluster User Manual
Table of Contents
- Safety Instructions
- Preface
- Getting Started
- PowerEdge Cluster Components
- Minimum System Requirements
- Basic Installation Procedure
- Adding Peripherals Required for Clustering
- Setting Up the Cluster Hardware
- Cabling the Cluster Hardware
- Updating System BIOS/Firmware for Clustering
- Setting Up the Shared Storage Subsystem Hard-Disk Drives
- Setting Up the Internal SCSI Hard-Disk Drives
- Installing and Configuring Windows NT Server, Enterprise Edition
- Installing and Configuring the Microsoft Cluster Server
- Installing PowerEdge Cluster Applications
- Checking the System
- Cabling the Cluster Hardware
- Configuring the Cluster Software
- Low-Level Software Configuration
- High-Level Software Configuration
- Installing Intel LANDesk® Server Manager
- Choosing a Domain Model
- Static IP Addresses
- IPs and Subnet Masks
- Configuring Separate Networks on a Cluster
- Changing the IP Address of a Cluster Node
- Naming and Formatting Shared Drives
- Driver for the RAID Controller
- Updating the NIC Driver
- Adjusting the Paging File Size and Registry Sizes
- Verifying the Cluster Functionality
- Uninstalling Microsoft Cluster Server
- Removing a Node From a Cluster
- Setting Up the Quorum Resource
- Using the ftdisk Driver
- Cluster RAID Controller Functionality
- Running Applications on a Cluster
- Troubleshooting
- Upgrading to a Cluster Configuration
- Stand-Alone and Rack Configurations
- Cluster Data Sheet
- PowerEdge Cluster Configuration Matrix
- Regulatory Compliance
- Safety Information for Technicians
- Warranties and Return Policy
- Index

Troubleshooting
Table 5-1. Troubleshooting (continued)

Problem: One or more of the SCSI controllers are not detected by the system.
Probable cause: The controllers have conflicting SCSI IDs.
Corrective action: Change one of the controller SCSI IDs so that the ID numbers do not conflict. The controller in the primary node should be set to SCSI ID 7, and the controller in the secondary node should be set to SCSI ID 10. Refer to Chapter 3 for instructions on setting the SCSI IDs on the nodes.

Problem: One of the nodes can access one of the shared hard-disk drives, but the second node cannot.
Probable cause: The drive letters assigned to the hard-disk drive differ between the nodes.
Corrective action: Change the drive letter designation for the shared hard-disk drive so that it is identical on all nodes. (A sketch for comparing the letters seen by each node follows this table.)
Probable cause: The SDS 100 storage system has not been upgraded with the cluster-specific firmware.
Corrective action: Ensure that the SMB-connected node on the cluster is running the cluster-specific firmware, then upgrade the SDS 100 firmware by powering down the cluster and starting it up again. During start-up, the cluster-specific firmware on the node checks the version of the SDS 100 firmware; if the SDS 100 is found to be running the wrong version, the node automatically upgrades it to the correct firmware version. (This start-up check is sketched after this table.)
Probable cause: The SCSI cable between the node and the shared storage subsystem is faulty or not connected.
Corrective action: Attach or replace the SCSI cable between the cluster node and the shared storage subsystem.

Problem: Server management functions are unavailable when both nodes are functional.
Probable cause: The SMB cable is not connected properly to the SDS 100 storage system(s).
Corrective action: Check the SMB connections. The primary node should be connected to the first storage system, and the second storage system (if present) should be connected to the first storage system. Refer to Chapter 2 for information about connecting the SMB cable.

Problem: Clients are dropping off the network while the cluster is failing over.
Probable cause: The service provided by the recovery group becomes temporarily unavailable to clients during fail-over. Clients may lose their connection if their attempts to reconnect to the cluster are too infrequent or if they stop too soon.
Corrective action: Reconfigure the dropped client to make longer and more frequent attempts to reconnect to the cluster. (A reconnect-loop sketch follows this table.)
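For the drive-letter mismatch above, it can help to list the letters each node currently sees and compare the output side by side. The following is a minimal sketch, assuming Python is available on the nodes; it is an illustrative helper, not part of the manual's documented procedure.

```python
# Illustrative helper: list the drive letters visible on this node so the
# two cluster nodes can be compared. Not part of the manual's procedure.
import os
import string

def visible_drive_letters():
    """Return the drive letters this node currently has assigned."""
    return [letter for letter in string.ascii_uppercase
            if os.path.exists(letter + ":\\")]

if __name__ == "__main__":
    # Run on each node; shared drives must carry identical letters on both.
    print("Drive letters on this node:", visible_drive_letters())
```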

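The automatic upgrade described in the firmware corrective action is a check-then-flash flow at start-up. The sketch below illustrates that logic only; REQUIRED_VERSION, read_sds100_version, and flash_sds100 are hypothetical names, since the real check runs inside the node's cluster-specific firmware.

```python
# Sketch of the start-up check: compare the SDS 100 firmware version
# against the cluster-specific version and upgrade if they differ.
# All names here are hypothetical placeholders.
REQUIRED_VERSION = "1.20-cluster"  # illustrative version string

def startup_firmware_check(read_sds100_version, flash_sds100):
    """Upgrade the SDS 100 if it is not running the required firmware."""
    if read_sds100_version() != REQUIRED_VERSION:
        # Wrong (or stand-alone) firmware detected: upgrade automatically,
        # as the node does during cluster start-up.
        flash_sds100(REQUIRED_VERSION)
        return True   # an upgrade was performed
    return False      # firmware already correct
```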

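For the client drop-off problem, "longer and more frequent attempts" amounts to a retry loop with a short interval and a generous overall deadline. The sketch below shows one way a client-side reconnect could be structured; the address, port, and timing values are assumptions, not values from the manual.

```python
# Sketch of a client reconnect loop that retries frequently and keeps
# trying long enough for a fail-over to complete. Endpoint and timing
# values are illustrative assumptions.
import socket
import time

CLUSTER_ADDRESS = ("cluster.example.com", 139)  # hypothetical endpoint
RETRY_INTERVAL = 5    # seconds between attempts ("more frequent")
RETRY_WINDOW = 300    # total seconds to keep trying ("longer")

def reconnect():
    """Retry until the cluster answers or the retry window closes."""
    deadline = time.monotonic() + RETRY_WINDOW
    while time.monotonic() < deadline:
        try:
            return socket.create_connection(CLUSTER_ADDRESS, timeout=10)
        except OSError:
            time.sleep(RETRY_INTERVAL)  # cluster still failing over
    raise ConnectionError("cluster did not come back within the retry window")
```

Shortening RETRY_INTERVAL and lengthening RETRY_WINDOW makes the client more tolerant of slow fail-overs, which is the behavior the corrective action asks for.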