If you suspect that a render node has failed due to a job exceeding the node's memory capacity, check the
logs:
1 If the render node is running a graphical desktop, log in as root and open a terminal. Otherwise, log in as root.
2 Navigate to /usr/discreet/log. This directory contains the event logs for the Burn servers installed on the render node. You need to view the log created at the time the server failed. Identify that log file using one of the following methods (a sample command for listing the logs appears after this list):
■ If the render node has just failed, look for the following file:
burn<version>_<render_node_name>_app.log.
■ If the render node failed previously and was brought back online, look for
burn<version>_<render_node_name>_app.log.## created around the time of the render node's
failure.
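For example, to list the Burn logs with the most recent first, you can use a standard command such as the following (the exact file names depend on your Burn version and the render node's host name):
ls -lt /usr/discreet/log/burn*_app.log*
The file at the top of the listing is the newest, and the modification times in the listing help you match a rolled-over .log.## file to the time of the failure.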
3 Review the messages in the log file for entries similar to the following, which may indicate that the render node was experiencing memory problems at the time of failure:
[error] 8192 PPLogger.C:145 01/24/06:17:06:16.998 Cannot load video media in node "clip17" for frame 2
[error] 8192 PPLogger.C:145 01/24/06:17:06:17.210 Out of memory for image buffers in node "clip6" (76480512 bytes). Increase your memory token.
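If the log file is long, you can search it for out-of-memory messages instead of reading it line by line. For example, the following standard command (not a Burn-specific tool) searches all Burn logs in the directory:
grep -i "out of memory" /usr/discreet/log/burn*_app.log*
Each matching line includes a timestamp, which lets you confirm that the error occurred around the time the server failed.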
4 Next, check the Backburner Server log file /usr/discreet/backburner/log/backburnerServer.log
from the time of the server failure, using the methods listed above.
5 Review the messages in the Backburner Server log file in a text editor, looking for entries similar to the
following:
[notice] 16387 common_services.cpp:45 01/24/06:17:06:10.069 Launching 'burn'
[error] 16387 common_services.cpp:37 01/24/06:17:06:48.182 Task error: burn application terminated (Hangup)
[error] 16387 common_services.cpp:37 01/24/06:17:06:48.182 burn application terminated (Hangup)
These log entries confirm that a server failure occurred on the render node. Since you know the server
failed around this time, you can deduce that the memory problem caused the server to fail.
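As with the Burn log, you can search this file rather than scanning it manually. For example, a plain grep such as the following (not a Backburner-specific tool) lists the termination messages with their timestamps:
grep -i "terminated" /usr/discreet/backburner/log/backburnerServer.log
Termination entries that appear shortly after the out-of-memory errors in the Burn log support the conclusion that the memory problem caused the failure.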
6 Optional: Identify the workstation running the application that submitted the job, and then look at the Batch setup, timeline segment, or clip to try to determine why the server failed. Knowing what caused the render node to fail can help you gauge what jobs your render nodes can handle, and can suggest ways of dealing with the problem. Failures caused by a lack of memory on a render node usually arise from the following (see the rough estimate after this list):
■ The size of images used in a project. For example, projects using higher-resolution HD, 2K, and 4K images require more memory to store and render than SD projects.
■ The complexity of the effect sent for processing. For example, a complex Batch setup with many
layers and effects requires more memory to render than a simple Batch setup.
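As a rough illustration (the figures below are illustrative assumptions, not values from this guide), an uncompressed 8-bit RGBA frame occupies about width × height × 4 bytes of memory:
720 × 486 × 4 bytes ≈ 1.4 MB per SD frame
1920 × 1080 × 4 bytes ≈ 8.3 MB per HD frame
4096 × 2160 × 4 bytes ≈ 35 MB per 4K frame
A multi-layer Batch setup keeps several such buffers in memory at once, which is why larger formats and more complex setups exhaust a render node's memory sooner.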
Addressing Memory Issues
If servers on your render nodes are failing while processing jobs, increase the amount of RAM set aside for processing. You must repeat this procedure on each render node on your network that runs the server.
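Before deciding how much RAM to reserve, you may want to confirm how much physical memory each render node has. This is a standard Linux check rather than a step from this procedure:
free -m
The total column of the Mem: row reports the node's physical memory in megabytes.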
To configure Burn to reserve a set amount of RAM for jobs:
1 As root in a terminal, stop the Backburner Server daemon by typing: /etc/init.d/backburner_server stop
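To confirm that the daemon has actually stopped before continuing, you can check the process list; this optional check is not part of the documented procedure:
ps -ef | grep -i backburner
If no Backburner Server process is listed (other than the grep command itself), the daemon has stopped.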