Supported Resource Usage Limits and Syntax
When the job accumulates the specified amount of CPU time, a SIGXCPU signal is
sent to all processes belonging to the job. If the job has no signal handler for
SIGXCPU, the job is killed immediately. If the SIGXCPU signal is handled, blocked,
or ignored by the application, then after the grace period expires, LSF sends
SIGINT, SIGTERM, and SIGKILL to the job to kill it.
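For example, an application that wants to save its state before termination might install a SIGXCPU handler along the following lines (a minimal POSIX C sketch; the flag name and the cleanup step are placeholders, not part of LSF):

#include <signal.h>

/* Placeholder flag; set when the CPU limit is reached so the main
 * loop can clean up before the grace period expires and LSF sends
 * SIGINT, SIGTERM, and SIGKILL. */
static volatile sig_atomic_t cpu_limit_hit = 0;

static void on_xcpu(int sig)
{
    (void)sig;          /* only set a flag; stay async-signal-safe */
    cpu_limit_hit = 1;
}

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = on_xcpu;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGXCPU, &sa, NULL);

    while (!cpu_limit_hit) {
        /* application work */
    }
    /* checkpoint or flush results here, then exit cleanly */
    return 0;
}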
You can define whether the CPU limit is a per-process limit enforced by the OS or a per-job limit enforced by LSF with LSB_JOB_CPULIMIT in lsf.conf.
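For example, to request the per-job behavior, the parameter might be set as follows in lsf.conf (shown as an illustration; check the lsf.conf reference for the values your version accepts):
LSB_JOB_CPULIMIT=y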
Jobs submitted to a chunk job queue are not chunked if the CPU limit is greater
than 30 minutes.
Format
cpu_limit is in the form [hour:]minute, where minute can be greater than 59. For example, 3.5 hours can be specified either as 3:30 or as 210.
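Both of the following submissions (illustrative command lines using the -c option described in this section) therefore request the same 3.5-hour CPU limit for myjob:
bsub -c 3:30 myjob
bsub -c 210 myjob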
Normalized CPU time
The CPU time limit is normalized according to the CPU factor of the submission
host and execution host. The CPU limit is scaled so that the job does approximately
the same amount of processing for a given CPU limit, even if it is sent to a host with
a faster or slower CPU.
For example, if a job is submitted from a host with a CPU factor of 2 and executed
on a host with a CPU factor of 3, the CPU time limit is multiplied by 2/3 because
the execution host can do the same amount of work as the submission host in 2/3
of the time.
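In this example, a job submitted with a 60-minute CPU limit would therefore be allowed approximately 60 x 2/3 = 40 minutes of CPU time on the execution host.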
If the optional host name or host model is not given, the CPU limit is scaled based on the DEFAULT_HOST_SPEC specified in the lsb.params file. (If DEFAULT_HOST_SPEC is not defined, the fastest batch host in the cluster is used as the default.) If a host name or host model is given, its CPU scaling factor is used to adjust the actual CPU time limit at the execution host.
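For example, a default host specification could be set in the Parameters section of lsb.params along these lines (an illustration only; DEC3000 is simply the host model reused from the example that follows):
Begin Parameters
DEFAULT_HOST_SPEC = DEC3000
End Parameters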
The following example specifies that myjob can run for 10 minutes on a DEC3000 host, or the corresponding time on any other host:
bsub -c 10/DEC3000 myjob
See CPU Time and Run Time Normalization on page 551 for more information.
Data segment size limit
Sets a per-process (soft) data segment size limit in KB for each process that belongs to this batch job (see getrlimit(2)).
This option affects calls to sbrk() and brk(). An sbrk() or malloc() call that attempts to extend the data segment beyond the data limit returns an error.
NOTE: Linux does not use sbrk() and brk() within its calloc() and malloc(). Instead, it uses mmap() to create memory. DATALIMIT therefore cannot be enforced on Linux applications that allocate memory through calloc() and malloc().
Job syntax (bsub)    Queue syntax (lsb.queues)      Format/Default    Units
-D data_limit        DATALIMIT=[default] maximum    integer           KB
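As an illustration, a job's process can inspect the soft data limit applied to it and observe the allocation failure described above (a minimal C sketch, assuming an allocator that grows the data segment with brk()/sbrk(); on Linux, as noted, mmap()-based allocation is not constrained by this limit):

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_DATA, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* Report the soft data segment limit set for this process. */
    printf("soft data limit: %ld bytes\n", (long)rl.rlim_cur);

    /* Attempt to grow the data segment past the limit; where malloc()
     * extends the data segment with brk()/sbrk(), this allocation
     * fails and returns NULL. */
    void *p = malloc((size_t)rl.rlim_cur + 1024u * 1024u);
    if (p == NULL)
        perror("malloc");
    else
        free(p);
    return 0;
}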