Example
Begin HostPartition
HPART_NAME = equal_share_partition
HOSTS = all
USER_SHARES = [default, 1]
End HostPartition
Priority user and static priority fairshare
There are two ways to configure fairshare so that a more important user’s job always
overrides the job of a less important user, regardless of resource use.
◆ Priority User Fairshare
Dynamic priority is calculated as usual, but more important and less important users
are assigned a drastically different number of shares, so that resource use has
virtually no effect on the dynamic priority: the user with the overwhelming majority
of shares always goes first. However, if two users have a similar or equal number of
shares, their resource use still determines which of them goes first.
This is useful for isolating a group of high-priority or low-priority users, while
allowing other fairshare policies to operate as usual most of the time.
◆ Static Priority Fairshare
Dynamic priority is no longer dynamic, because resource use is ignored. The user
with the most shares always goes first.
This is useful for configuring multiple users in descending order of priority.
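As a sketch of how this can be approached, assuming that the lsb.params weighting factors CPU_TIME_FACTOR, RUN_TIME_FACTOR, and RUN_JOB_FACTOR are the parameters weighting resource use in your cluster's dynamic priority formula, setting them all to 0 removes resource use from the calculation so that only the number of shares determines priority:
Begin Parameters
...
# Sketch: zero the resource-use weighting factors so that dynamic
# priority depends only on the number of shares assigned to each user
CPU_TIME_FACTOR = 0
RUN_TIME_FACTOR = 0
RUN_JOB_FACTOR = 0
...
End Parameters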
Priority user fairshare
Priority user fairshare gives priority to important users, so their jobs override the jobs of
other users. You can still use fairshare policies to balance resources among each group
of users.
If two users compete for resources, and one of them is a priority user, the priority user’s
job always runs first.
Configuring
To configure priority users, assign the overwhelming majority of shares to the most
important users.
Example
A queue is shared by key users and other users. As long as there are jobs from key users
waiting for resources, other users’ jobs will not be dispatched.
1 Define a user group called key_users in lsb.users (a minimal sketch of this definition follows the example below).
2 Configure fairshare and assign the overwhelming majority of shares to the key users:
Begin Queue
QUEUE_NAME = production
FAIRSHARE = USER_SHARES[[key_users@, 2000] [others, 1]]
...
End Queue
Key users have 2000 shares each, while other users together have only 1 share. This makes it virtually impossible for other users’ jobs to get dispatched unless none of the users in the key_users group has jobs waiting to run.
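For step 1, a minimal sketch of the key_users group definition in lsb.users could look like the following; the member user names are illustrative:
# key_users: the users whose jobs should always be dispatched first
Begin UserGroup
GROUP_NAME     GROUP_MEMBER
key_users      (user1 user2 user3)
End UserGroup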
If you want the same fairshare policy to apply to jobs from all queues, configure host
partition fairshare in a similar way.
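For example, a host partition such as the following (the partition name is illustrative, and the group@ notation is assumed to distribute shares to each group member in the same way as in the queue-level example above) applies the same policy to jobs on all hosts in the partition:
Begin HostPartition
HPART_NAME = key_users_partition
HOSTS = all
USER_SHARES = [key_users@, 2000] [default, 1]
End HostPartition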