When the total CPU time for the whole job has reached the limit, a SIGXCPU signal is
sent to all processes belonging to the job. If the job has no signal handler for SIGXCPU,
the job is killed immediately. If the SIGXCPU signal is handled, blocked, or ignored by
the application, then after the grace period expires, LSF sends SIGINT, SIGTERM, and
SIGKILL to the job to kill it.
If a job dynamically spawns processes, the CPU time used by these processes is
accumulated over the life of the job.
Processes that exist for fewer than 30 seconds may be ignored.
By default, if a default CPU limit is specified, jobs submitted to the queue without a job-level CPU limit are killed when the default CPU limit is reached.
If you specify only one limit, it is the maximum, or hard, CPU limit. If you specify two
limits, the first one is the default, or soft, CPU limit, and the second one is the maximum
CPU limit. The number of minutes may be greater than 59. Therefore, three and a half
hours can be specified either as 3:30 or 210.
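For example, the following queue definition (a sketch; the queue name is illustrative) sets a default, or soft, CPU limit of three and a half hours and a maximum, or hard, limit of five hours:

   Begin Queue
   QUEUE_NAME = normal
   # Soft limit 3.5 hours, hard limit 5 hours; 3:30 could equally be written as 210
   CPULIMIT = 3:30 5:00
   End Queue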
If no host or host model is given with the CPU time, LSF uses the default CPU time normalization host defined at the queue level (DEFAULT_HOST_SPEC in lsb.queues) if it has been configured; otherwise, the default CPU time normalization host defined at the cluster level (DEFAULT_HOST_SPEC in lsb.params) if it has been configured; otherwise, the host with the largest CPU factor (the fastest host in the cluster).
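For example, a limit can be normalized against a specific host, or a queue-level normalization host can be set (hostA here is purely illustrative):

   # Normalize the CPU limit against hostA rather than the cluster default
   CPULIMIT = 210/hostA
   # Or set the queue-level normalization host used when no host is given
   DEFAULT_HOST_SPEC = hostA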
On Windows, a job that runs under a CPU time limit may exceed that limit by up to SBD_SLEEP_TIME, because sbatchd checks whether the limit has been exceeded only periodically.
On UNIX systems, the CPU limit can be enforced by the operating system at the
process level.
You can define whether the CPU limit is a per-process limit enforced by the OS or a per-job limit enforced by LSF with LSB_JOB_CPULIMIT in lsf.conf.
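For example, a minimal lsf.conf sketch that makes the CPU limit a per-job limit enforced by LSF:

   # In lsf.conf: y makes LSF enforce a per-job limit,
   # n makes the OS enforce a per-process limit
   LSB_JOB_CPULIMIT=y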
Jobs submitted to a chunk job queue are not chunked if CPULIMIT is greater than 30
minutes.
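For example, assuming a queue configured for chunk jobs with the CHUNK_JOB_SIZE parameter (the values here are illustrative), a CPULIMIT of 30 minutes or less keeps jobs eligible for chunking:

   Begin Queue
   QUEUE_NAME = chunk
   CHUNK_JOB_SIZE = 4
   # 30 minutes or less, so jobs submitted to this queue can still be chunked
   CPULIMIT = 30
   End Queue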
Default
Unlimited
DATALIMIT
Syntax
DATALIMIT=[default_limit] maximum_limit
Description
The per-process data segment size limit (in KB) for all of the processes belonging to a job from this queue (see getrlimit(2)).
By default, if a default data limit is specified, jobs submitted to the queue without a job-level data limit are killed when the default data limit is reached.
If you specify only one limit, it is the maximum, or hard, data limit. If you specify two limits, the first one is the default, or soft, data limit, and the second one is the maximum data limit.
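For example, to set a default (soft) data limit of 100 MB and a maximum (hard) limit of 200 MB, expressed in KB (the values are illustrative):

   # Soft limit 102400 KB (100 MB), hard limit 204800 KB (200 MB)
   DATALIMIT = 102400 204800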
Default
Unlimited