Installation guide
| Suspend Mode            [Disabled]           |                   |
| HDD Power Down          [Disabled]           |                   |
| Soft-Off by PWR-BTTN    [Instant-Off]        |                   |
| CPU THRM-Throttling     [50.0%]              |                   |
| Wake-Up by PCI card     [Enabled]            |                   |
| Power On by Ring        [Enabled]            |                   |
| Wake Up On LAN          [Enabled]            |                   |
| x USB KB Wake-Up From S3   Disabled          |                   |
| Resume by Alarm         [Disabled]           |                   |
| x Date(of Month) Alarm     0                 |                   |
| x Time(hh:mm:ss) Alarm     0 : 0 : 0         |                   |
| POWER ON Function       [BUTTON ONLY]        |                   |
| x KB Power ON Password     Enter             |                   |
| x Hot Key Power ON         Ctrl-F1           |                   |
|                                              |                   |
|                                              |                   |
+----------------------------------------------+-------------------+
This example shows ACPI Function set to Enabled, and Soft-Off by PWR-BTTN set to
Instant-Off.
2.4.3. Disabling ACPI Completely in the grub.conf File
The preferred method of disabling ACPI Soft-Off is with chkconfig management (Section 2.4.1,
“Disabling ACPI Soft-Off with chkconfig Management”). If the preferred method is not effective for
your cluster, you can disable ACPI Soft-Off with the BIOS power management (Section 2.4.2,
“Disabling ACPI Soft-Off with the BIOS”). If neither of those methods is effective for your cluster, you
can disable ACPI completely by appending acpi=off to the kernel boot command line in the
grub.conf file.
Important
This method completely disables ACPI; some computers do not boot correctly if ACPI is
completely disabled. Use this method only if the other methods are not effective for your cluster.
You can disable ACPI completely by editing the grub.conf file of each cluster node as follows:
1. Open /boot/grub/grub.conf with a text editor.
2. Append acpi=off to the kernel boot command line in /boot/grub/grub.conf (refer to
Example 2.2, “Kernel Boot Command Line with acpi=off Appended to It”).
3. Reboot the node.
4. When the cluster is configured and running, verify that the node turns off immediately when
fenced.
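The edit in step 2 can also be scripted. The following is a minimal sketch that appends acpi=off to each kernel boot line with sed; it operates on a sample file whose kernel version, root device, and initrd name are illustrative assumptions — on a real node you would edit /boot/grub/grub.conf and keep a backup first:

```shell
# Create a sample grub.conf fragment (kernel line is illustrative)
cat > sample-grub.conf <<'EOF'
title Red Hat Enterprise Linux Server
        root (hd0,0)
        kernel /vmlinuz-2.6.18-36.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.18-36.el5.img
EOF

# Append acpi=off to every kernel boot command line
sed -i '/^[[:space:]]*kernel /s/$/ acpi=off/' sample-grub.conf

# Show the result
grep 'kernel ' sample-grub.conf
```

The same sed expression works against /boot/grub/grub.conf; because the change is appended at the end of the line, existing boot parameters are left untouched.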
Note
You can fence the node with the fence_node command or Conga.
Example 2.2. Kernel Boot Command Line with acpi=off Appended to It
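A kernel boot stanza with acpi=off appended might look like the following sketch (the kernel version, root device, and initrd file name are illustrative assumptions; your node's grub.conf will differ):

```
title Red Hat Enterprise Linux Server
        root (hd0,0)
        kernel /vmlinuz-2.6.18-36.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet acpi=off
        initrd /initrd-2.6.18-36.el5.img
```

Note that acpi=off is the last parameter on the kernel line; the other boot parameters are unchanged.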