3.7.0 HP StorageWorks HP Scalable NAS File Serving Software administration guide - HP Scalable NAS 3.7 for Linux (AG513-96002, October 2009)

when the service is already stopped, without considering this to be an error. In both
of these cases, the script should exit with a zero exit status. This behavior is necessary
because HP Scalable NAS runs the Start and Stop scripts to establish the desired
start/stop activity, even though the service may actually have been started by
something other than HP Scalable NAS before ClusterPulse was started.
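As a minimal sketch, a Stop script can check whether the service is running and exit 0 either way; only a genuine stop failure reports non-zero. The service name "mydaemon" and its init script path are illustrative placeholders, not part of HP Scalable NAS:

```shell
#!/bin/sh
# Hypothetical Stop script for a service named "mydaemon" (placeholder).
# Returns 0 whether the service was running or was already stopped.
stop_service() {
    if pgrep -x mydaemon > /dev/null 2>&1; then
        # Service is running: a genuine stop failure must propagate non-zero.
        /etc/init.d/mydaemon stop || return 1
    fi
    # Already stopped, or stopped cleanly: success either way.
    return 0
}

stop_service
```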
The Start and Stop scripts must also handle recovery from events that may cause them
to run unsuccessfully. For example, if the system runs out of swap space while running
a Start script, the script will fail and exit non-zero. The service could then become
active on another server, causing the Stop script to run on the original server even
though the Start script did not complete successfully.
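One way to handle this is to make each cleanup step in the Stop script conditional on its resource actually existing, so the script succeeds even when the Start script failed midway. The daemon name, PID file, and mount point below are illustrative placeholders:

```shell
#!/bin/sh
# Hypothetical Stop script logic that tolerates a partially completed Start.
# Each step checks for its resource first, so cleanup succeeds even if the
# Start script exited non-zero before finishing.
cleanup() {
    # Stop the daemon only if it is actually running.
    if pgrep -x mydaemon > /dev/null 2>&1; then
        pkill -x mydaemon || return 1
    fi

    # Remove the PID file only if Start got far enough to create it.
    [ -f /var/run/mydaemon.pid ] && rm -f /var/run/mydaemon.pid

    # Unmount the data filesystem only if Start mounted it.
    if grep -q ' /mnt/mydata ' /proc/mounts 2>/dev/null; then
        umount /mnt/mydata || return 1
    fi

    return 0
}

cleanup
```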
To configure scripts from the command line, use these options:
--recoveryScript <script> --recoveryTimeout <seconds>
--startScript <script> --startTimeout <seconds>
--stopScript <script> --stopTimeout <seconds>
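For example, the options might be combined in a single invocation as follows; the configuration command itself and the script paths are placeholders to substitute with your actual HP Scalable NAS command and scripts:

```shell
# Illustrative only: replace <config-command> and the paths with your
# actual monitor-configuration command and script locations.
<config-command> \
    --startScript /usr/local/cluster/start_mydaemon.sh --startTimeout 60 \
    --stopScript /usr/local/cluster/stop_mydaemon.sh --stopTimeout 30 \
    --recoveryScript /usr/local/cluster/recover_mydaemon.sh --recoveryTimeout 120
```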
Use custom scripts to modify start/stop activities
Some built-in monitors perform starting or stopping activities. If you need to take an
action before or after the starting or stopping activity, you can create a custom Start
or Stop script for the action and specify it on the Scripts tab for the monitor.
The default order for starting is:
1. Run the monitor's starting activities (if any).
2. Run the custom Start script (if any).
To reverse this order, prefix the Start script entry with [pre] on the Scripts tab.
The default order for stopping is:
1. Run the custom Stop script (if any).
2. Run the monitor's stopping activities (if any).
To reverse this order, prefix the Stop script entry with [post] on the Scripts tab.
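As a sketch, a custom Start script that takes an action after the monitor's own starting activity (the default ordering) might look like the following; the service name and log path are hypothetical placeholders:

```shell
#!/bin/sh
# Hypothetical custom Start script: under the default ordering it runs after
# the monitor's starting activity. Entering it as [pre]/path/to/script.sh on
# the Scripts tab would run it before the monitor's activity instead.
post_start() {
    # Placeholder action: record that the service came up on this node.
    echo "$(date): custom Start script ran on $(uname -n)" >> /tmp/mydaemon_start.log
    return 0
}

post_start
```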
Event severity
By default, HP Scalable NAS treats the failure or timeout of a Start or Stop script as
a failure of the associated monitored service and may initiate failover of the associated
virtual hosts. Configuration errors can also cause this behavior.