Setup guide

Server Group [BackburnerManagerGroup] Specifies a server group (a preset group of render nodes) used to
process jobs submitted by the application. By default, Backburner Manager assigns a job to all available
render nodes capable of processing it. If you have a dedicated group of render nodes for processing jobs, set
the value to the name of the render node group. See the Backburner User Guide for information on
creating groups.
Group Capability [BackburnerManagerGroupCapability] Enables or disables the submission of jobs that
require a GPU (such as floating point jobs) to the background processing network. Configure this according
to the GPU capabilities of the nodes in your background processing network:
Software: none of the nodes in your background processing network is equipped with a GPU. The
application will not send jobs that require a GPU to the background processing network, but only jobs
that can be processed in software mode (using OSMesa) by the render nodes.
GPU: all the nodes in your background processing network are GPU-enabled. The application will send
all jobs to the GPU-equipped nodes in the background processing network, even if some jobs do not
specifically require a GPU node. The GPU-equipped render nodes will render jobs that require a GPU, as
well as OSMesa jobs. If your background processing network also contains nodes without a GPU and
this setting is used, all jobs are sent only to GPU-equipped render nodes, and the nodes without a GPU
are never used.
Hybrid: your background processing network contains a mix of nodes with GPUs and without GPUs.
The application will send all jobs to the background processing network, and Backburner Manager will
distribute each job to the appropriate type of render node. Jobs that require a GPU are sent only to
GPU-equipped nodes, while jobs that do not require a GPU are sent to any available render node (GPU
or non-GPU), to be processed in software mode. Use this setting only if you are sure that at least one
node in your background processing network is equipped with a GPU. Attempting to submit a job that
requires a GPU to a background processing network with no GPU-enabled nodes results in the job being
stuck in the queue indefinitely.
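As an illustration only, the two keywords above might appear together in the application's configuration file as follows. The file name, location, and group name here are assumptions; check your installation for the actual file and the exact keyword syntax.

```
# Hypothetical fragment of the application's configuration file.
# "RenderFarm" is an example group name defined in Backburner Manager.
BackburnerManagerGroup=RenderFarm
# One of: Software, GPU, Hybrid (see the descriptions above).
BackburnerManagerGroupCapability=Hybrid
```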
Configure multicasting
Enable multicasting in Stone and Wire. This is unnecessary if you are upgrading an existing installation of Burn.
1 Open the /usr/discreet/sw/cfg/sw_probed.cfg configuration file on the render node in a text editor.
2 Set SelfDiscovery to Yes, so that sw_probed runs in self-discovery mode and automatically probes
the network for other systems. This is set to Yes by default when Stone and Wire is installed on the
render node.
3 Set the Scope parameter, which defines the scope of the multicast. This setting must be the same for
all machines on your network.
For networks with a single subnet, set it to LinkLocal.
For networks with multiple subnets, use a value appropriate for your requirements and router
configuration: either OrganizationLocal or GlobalWorld.
4 If the workstations and nodes in your facility are on separate networks connected through routers, use
the ttl parameter in the file to specify the number of router hops for a multicast. Transfers across
multiple routers may cause bottlenecks at network bridges, especially with jobs involving film frames.
Using the ttl parameter may reduce multicast-related traffic and improve general network performance
in your facility. Consult your network administrator for guidance on setting the appropriate values for
your network.
5 Save and close sw_probed.cfg, then restart the sw_probed daemon: /etc/init.d/stone+wire restart.
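The edits in steps 2 and 3 can be sketched as a small shell helper. This is a sketch under the assumption that sw_probed.cfg uses Keyword=Value lines; verify the syntax against your installed file before using it, and run the restart command from step 5 afterwards on the render node.

```shell
#!/bin/sh
# configure_multicast <cfg-file>
# Edits a sw_probed.cfg-style file in place: enables self-discovery and
# sets a single-subnet multicast scope (LinkLocal), per steps 2 and 3.
# Assumes Keyword=Value syntax; adjust the patterns if your file differs.
configure_multicast() {
    sed -i \
        -e 's/^SelfDiscovery=.*/SelfDiscovery=Yes/' \
        -e 's/^Scope=.*/Scope=LinkLocal/' \
        "$1"
}

# Usage on a render node (path from the guide), followed by the restart
# from step 5:
#   configure_multicast /usr/discreet/sw/cfg/sw_probed.cfg
#   /etc/init.d/stone+wire restart
```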