Installation and Configuration Guide for Linux Workstations
Legal Notices Autodesk® Flame® 2014, Autodesk® Flame® Premium 2014, Autodesk® Flare™ 2014, Autodesk® Flint® 2014, Autodesk® Inferno® 2014, Autodesk® Lustre® 2014, Autodesk® Smoke® Advanced 2014, Autodesk® Smoke® HD 2014, Autodesk® Backdraft® Conform 2014 © 2013 Autodesk, Inc. All rights reserved. Except as otherwise permitted by Autodesk, Inc., this publication, or parts thereof, may not be reproduced in any form, by any method, for any purpose.
Contents
Chapter 1 Flame Premium Installation and Configuration
  Hardware setup
  Typical configuration overview for Creative Finishing applications
  Video
  Audio
  Command line start-up options
  Node-locked licensing
  Network licensing
    Install the license server software
    Get the unique host ID of a license server
    Request license codes
  Configure workstations for Burn
  Configure multicasting
  Install additional fonts
  Disable local Stone and Wire IO on a node
  Run multiple versions of Burn on the same node
  License your software
    Two licensing scenarios
    Get license codes
Chapter 1 Flame Premium Installation and Configuration
Prerequisites for installation
■ Root access to your system. The default root account password on an Autodesk workstation is password.
■ If you need to change your system date or time, do it before installing the application.
■ Archiving existing media on the framestore is recommended.
■ Prepare the installation media (page 31) to access the install directory.
6 Install Creative Finishing software (page 30).
7 Configure media storage (page 33).
8 License your software. If you are not on subscription, use Node-locked licensing (page 55). On subscription, you can use node-locked or Network licensing (page 56). Licensing is unnecessary if you are upgrading to a service pack of the same software version or extension.
Hardware setup
If you are only upgrading an existing application, you do not need to reconfigure your hardware.
Typical configuration overview for Creative Finishing applications
Typical configuration
The Z820 with the optional NVIDIA SDI2, AJA KONA 3G and 2-port GigE adapters. Optionally, your workstation can be set up with a second ATTO Fibre Channel adapter in slot 1.
The Z800 with a 2-port GigE adapter in slot 1 (top to bottom), and a Mellanox QDR InfiniBand / 10-GigE adapter in slot 7. Optionally, your workstation can be set up with a second ATTO Fibre Channel adapter in slot 1.
Video
HP Z820 Video I/O
The only video hardware you must provide is a sync generator, a VTR, an HD/SDI-ready broadcast monitor, and a patch panel (if desired). Some of the following steps might not be necessary depending on your hardware configuration.
Connection procedure:
1 Connect the output of the sync generator to the top Ref Loop port of the AJA K3G-Box.
2 Connect the Input port of the NVIDIA SDI card (the one next to the DVI port) to the bottom Ref Loop port of the AJA K3G-Box.
The only video hardware you must provide is a sync generator, a VTR, an HD/SDI-ready broadcast monitor, and a patch panel (if desired).
1 Connect the output of the sync generator to the top Ref Loop port of the AJA K3G-Box.
2 Connect the Input port of the NVIDIA SDI card (the one next to the DVI port) to the bottom Ref Loop port of the AJA K3G-Box.
3 Connect the Fill (outer) port of the NVIDIA SDI card to the Input port of the AJA HD5DA distribution amplifier.
Connect the Discreet Native Audio hardware components to the AJA breakout box.
Media storage
It is not recommended to use the system disk for media storage. The following can be used:
■ A UNIX-compatible filesystem on a DAS (such as an Autodesk-recommended Dot Hill or XR-series disk array).
■ A UNIX-compatible filesystem on a Network Attached Storage (NAS) based on the Network File System (NFS) protocol.
■ A SAN: infrastructure that allows multiple workstations to share simultaneous access to a central storage enclosure. When attached to a CXFS SAN declared as a standard filesystem partition to Stone and Wire, Creative Finishing workstations running the current release have shown optimal (real-time) performance with version 4.02 of the CXFS client, and the following mount options for the CXFS volume: rw,noatime,filestream,inode64. ■ USB 2.
Connecting the FC loops to the main enclosure controllers: each controller has 4 FC ports, numbered 0 to 3. The Autodesk-recommended configuration uses only ports 0 and 2 of each controller. The storage can be connected to the workstation through either 2 FC loops or 4 loops. 2-loop configuration: On ATTO cards, two microchips handle fibre traffic to the four ports: chip A handles ports 1 and 2, and chip B handles ports 3 and 4.
■ ATTO Port 4 to Dot Hill port B2
XR 6500
The following diagrams illustrate 2-loop and 4-loop connections for XR 6500 series storage assemblies. Cable your storage exactly as illustrated to ensure proper functionality. An XR 6500 RAID enclosure supports a maximum of seven XE expansion enclosures. Configurations with two XR RAID enclosures are not supported.
Notes: ■ In a 4-loop configuration, you need a minimum of one XE expansion enclosure attached to the XR 6500 RAID enclosure.
■ The total number of enclosures must be an even number.
XR 6412
The following diagrams illustrate 2-loop and 4-loop connections for XR 6412 series storage assemblies. An XR 6412 RAID enclosure supports a maximum of seven XE expansion enclosures. Configurations with two XR RAID enclosures are not supported.
NOTE In a 4-loop configuration with an XR 6412 RAID enclosure, you need a minimum of one XE expansion enclosure attached to the XR RAID enclosure.
XR 5402 and XR 5412
The following diagrams illustrate 2-loop and 4-loop connections for XR 5402 and XR 5412 series storage assemblies. XR 5402 and XR 5412 series storage assemblies support 2-loop configurations with one XR RAID enclosure, and 4-loop configurations with two XR RAID enclosures.
The XR 5402 and XR 5412 RAID enclosures support a maximum of four XE expansion enclosures.
In a configuration with two XR RAID enclosures, the number of XE extension enclosures per XR RAID enclosure must be the same. The total number of enclosures in the storage assembly must be an even number. Archiving to USB 2.0, FireWire (IEEE 1394) and fibre channel devices is supported. This includes filesystems, tape drives, and VTRs. For information on connecting a VTR, see Video (page 6).
BIOS settings (Item: Value):
■ Runtime Power Management: Disable
■ Idle Power Saving: Normal
■ Turbo Mode: Disable
■ SATA Power Management: Disable
■ NUMA: Disable
■ Internal Speaker: Disable
■ NIC Option ROM Download: Disable
■ NIC1 Option ROM Download: Disable
■ Hyper-threading: Enable
■ Slot 5 Option ROM Download: Disable
■ Slot 7 Option ROM Download: Disable
(Related BIOS menus: Power, Advanced, Hardware Power Management, Bus Options, Device Options, OS Power Management, Slot Settings; boot devices: USB Floppy / CD, Hard Drive.)
BIOS settings (Item: Value):
■ SATA Emulation: RAID+AHCI
■ Boot Order: Optical Drive, USB Device, Hard Drive
■ Runtime Power Management: Disable
■ Idle Power Saving: Normal
■ MWAIT Aware OS: Disable
■ ACPI S3 Hard Disk Reset: Disable
■ SATA Power Management: Disable
■ Intel Turbo Boost Technology: Disable
■ Hyper-Threading: Enable
■ Memory Node Interleave: Enable
■ NUMA Split Mode: Disable
■ S5 Wake on LAN: Disable
■ Internal Speaker
(Related BIOS menus: Storage > Storage Options, Integrated SATA Power, OS Power Management, Processors, Chipset/Memory.)
Install Linux
To prepare your system and perform a fresh install of Red Hat Enterprise Linux:
1 If reinstalling Linux on an existing system, back up all user settings, project settings, and media from the system disk to other media. The Linux installation formats the system disk, resulting in total data loss. In addition, back up the following directories:
■ /usr/discreet (for software setup and configuration files).
■ /etc (for Linux configuration files, networking, etc.).
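For example, a minimal backup of the configuration directories to an already-mounted external volume might look like this (the destination path /mnt/backup is an assumption):
tar -czf /mnt/backup/discreet_config_backup.tar.gz /usr/discreet /etc    # archives both directories in one file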
10 Configure basic network settings (page 21).
11 Configure an InfiniBand card (page 22).
12 Install the DKU and the AJA OEM-2K firmware (page 22).
13 Configure storage (page 23).
Configure basic network settings
Log in as root to edit the files described below in a text editor, and reboot the system for the new configuration to take effect. You'll need the following from your network administrator:
■ A unique static IP address and host name for your system.
■ The network gateway IP address.
Sample snippet from the network interface configuration file (for example /etc/sysconfig/network-scripts/ifcfg-eth0):
IPADDR="192.168.1.100"
NETMASK="255.255.0.0"
ONBOOT="yes"
GATEWAY=192.168.0.1
You'll need the following from your network administrator:
■ A unique static IP address and host name for your system.
■ The network gateway IP address.
■ The subnet mask of your network.
■ DNS server IP address(es).
Configure an InfiniBand card
If the card was not present when you last ran the DKU installation, run it again to set up the drivers for the card.
2 Run the install script (for example from the USB device): /mnt/usbdisk/DKU-/INSTALL_DKU. When the DKU installation script completes, a warning to update the AJA card or DVI-Ramp firmware may appear, and you are returned to the command prompt. If your workstation is connected to a SAN, run the install script with the --multipath parameter to install the multipath version of the ATTO driver, e.g. /mnt/usbdisk/DKU-/INSTALL_DKU --multipath.
4 Create LUNs. If you have more than one XR enclosure, create all of the LUNs individually, then create the XFS filesystem on them all at once.
5 Create the XFS filesystem.
Configure Dot Hill storage
1 Ensure the storage enclosures are connected to the workstation as documented. Connect an Ethernet cable to the Ethernet port of the top storage controller (controller A) and to an available network port on the workstation.
2 Configure your workstation's eth port to the same subnet as the storage controller.
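A sketch of step 2, assuming the controller management port uses 10.0.0.1 and the workstation uses interface eth1 (both values are examples; substitute your own addresses):
ifconfig eth1 10.0.0.2 netmask 255.255.255.0 up    # put the port on the controller's subnet
ping -c 3 10.0.0.1                                 # confirm the controller answers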
WARNING LUN setup destroys data on the device. 3 The utility detects the number of enclosures and drives and presents you with a list of options. Choose 2 to create LUNs with a sector size of 512 bytes. This is the optimal sector size for XFS DAS (direct-attached) storage of Creative Finishing applications. 4 Choose 2-loop or 4-loop configuration. 4-loop configurations are only supported for XR 6412 and XR 6500 storage. The utility creates LUNs on your storage. This process might take a few minutes.
How to power storage on or off Power your system and storage up or down in the proper sequence. An incorrect power up sequence can mean your system does not recognize all drives. Power on a system: 1 Ensure your workstation is shut down. 2 Power up the XE expansion enclosures. 3 Power up the XR RAID controller enclosures. 4 Wait about 90 seconds for all the drives to spin up. Their lights are solid green when they are spun up. 5 Power on your workstation. Power off a system: 1 Shut down your workstation.
4 When asked if you have a 2-loop or a 4-loop configuration, select the option that applies to your storage. The XR Configuration Utility configures your storage. 5 Type x to exit the XR Configuration Utility. 6 Reboot your workstation, so that the newly-created LUNs are rescanned by the operating system. The XR Configuration Utility exits without configuring your storage if any of the following is detected: ■ An incorrect number of disks. The total number of disks must be a multiple of 12.
5 Type n to display the New partition creation menu. fdisk displays the type of partitions you can create (primary or extended). 6 Create a primary partition on the disk device by typing p at the prompt. 7 When prompted to enter a partition number, type 1 to make the primary partition the first one on the LUN. NOTE You may have to delete pre-existing partitions by entering d when prompted, and repeating step 3.
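A condensed sketch of the fdisk dialogue described above, assuming the LUN appears as /dev/sdb (check the actual device name with fdisk -l first):
fdisk /dev/sdb
# at the fdisk prompt:
#   n    create a new partition
#   p    make it a primary partition
#   1    make it the first partition on the LUN
#   w    write the partition table and exit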
4 Create the volume group “vg00” from the physical volumes you created in the preceding step: vgcreate vg00 <physical volumes>, where <physical volumes> is the list of physical volumes you created in the preceding step.
TIP You can use the command vgremove to delete any erroneously entered volume.
5 Verify the volume was created and obtain the value of the “Free PE / Size” field: vgdisplay -v. In the output, find the line that contains the “Free PE / Size” field and write down the value of the “Free PE”.
Continue using the value calculated above as the new agsize value.
■ If the values of sunit and swidth are not equal to 0, and no warning message appears, proceed to step 4 using the agsize value displayed by the mkfs.xfs command in step 1.
4 Run mkfs.xfs again to create the XFS filesystem on the device /dev/vg00/lvol1, using the value calculated in one of the previous steps: mkfs.xfs -d agsize=<agsize> -f /dev/vg00/lvol1. The filesystem is created on the storage array.
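Putting the volume-creation steps together, a sketch assuming two LUN partitions /dev/sdb1 and /dev/sdc1 (the device names and the agsize value are placeholders; use the values determined in the steps above):
pvcreate /dev/sdb1 /dev/sdc1                      # physical volumes on the LUN partitions
vgcreate vg00 /dev/sdb1 /dev/sdc1                 # volume group vg00
vgdisplay -v vg00 | grep -i "free"                # note the Free PE value
lvcreate -l <free_PE_value> -n lvol1 vg00         # logical volume spanning the group
mkfs.xfs -d agsize=<agsize> -f /dev/vg00/lvol1    # XFS filesystem on the logical volume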
4 If your system has a customized xorg configuration, you are prompted to overwrite the file, or not.
5 Once the install has finished, log out of the root user, and log in with the application user (e.g. Flame). The password is null; there is no password set.
6 You can now further configure the application with the graphical Setup interface by clicking the link on the desktop.
Prepare the installation media
Check the release announcement to find out on what media the installers are available.
Uninstall
1 If you are logged in as the application user in KDE, log out and log back into KDE as root.
2 From the KDE menu, choose Autodesk > Remove Software.
3 Select the packages you want to uninstall in the RPM list on the left (click Select All to select all the packages), then click the arrow to move them to the RPM uninstall list on the right, and click Next.
4 In the Choose folders window, choose the application directories you want to remove from the /usr/discreet directory, and click Next.
Media Storage
Media Storage: Configures /usr/discreet/sw/cfg/stone+wire.cfg.
Backburner Local Server Setting: The network name of the workstation running the Backburner Manager. In a standalone setup, use localhost (the default). In a render-farm setup, enter the name of the dedicated Backburner Manager workstation.
Xorg.conf Screen Selection: Configures /etc/X11/xorg.conf.
Configure media storage
This is necessary for new installations.
5 If this is the first filesystem you are configuring for this workstation, get the FRAMESTORE ID, e.g. grep "FRAMESTORE" /usr/discreet/sw/cfg/sw_framestore_map, and use the ID value to update /usr/discreet/sw/cfg/sw_storage.cfg, e.g.:
[Framestore]
ID=myworkstation
6 Optionally, Configure bandwidth reservation (page 45).
7 Restart Stone and Wire with: /etc/init.d/stone+wire restart
8 Check that the filesystem is mounted: /usr/discreet/sw/sw_df
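The command side of steps 5 to 8, condensed into one hedged sequence (the framestore ID you write into sw_storage.cfg is the value reported by the grep):
grep FRAMESTORE /usr/discreet/sw/cfg/sw_framestore_map    # note the ID value
vi /usr/discreet/sw/cfg/sw_storage.cfg                    # set ID= under [Framestore]
/etc/init.d/stone+wire restart
/usr/discreet/sw/sw_df                                    # the new filesystem should be listed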
Backburner Server, which calls the Wire daemon to carry out the task. Monitoring is embedded in the Visual Effects and Finishing application. It can also be done using the Backburner Web Monitor (optional). To set up background I/O in Visual Effects and Finishing, on a workstation with a Visual Effects and Finishing application and all Backburner components installed:
3 Disable any settings that might cause proxies to be generated or the clip to be resized by editing the project's settings in the Preferences Menu, Project Management group. 4 Open the clip library and enable the following: ■ Dual Library View ■ Show All Libraries ■ Copy on Drag 5 In the Clip Library menu, click Network. The local system is listed at the top of the network library. Remote systems are listed below it, in alphabetical order.
BackburnerManagerGroup: Set to the name of a group of computers on a Burn® rendering network. For example, if the name of the group is “renderfarm1”, you would set this keyword to BackburnerManagerGroup renderfarm1.
Event triggers
Overview
You can set up your Creative Finishing software to execute custom external commands when certain events take place inside the application, for example, when the project or the video preview timing is changed by the user.
When the project is changed in the application, this example function outputs the name of the project in the application terminal.
void previewWindowConfigChanged(string description, int width, int height, int bitDepth, string rateString, string synchString)
This hook is called by the Creative Finishing application when the video preview timing is changed in the software. This function receives the following parameters from the application.
exportPath: Export path as entered in the application UI.
namingPattern: List of optional naming tokens as entered in the application UI.
resolvedPattern: Full path to the first frame that will be exported, with all the tokens resolved.
firstFrame: Frame number of the first frame that will be exported.
lastFrame: Frame number of the last frame that will be exported.
Return value: A new exportPath. Empty strings or non-string return values are ignored, while invalid paths cause the export to fail with a path error.
Troubleshoot the filesystem
This section describes some common filesystem problems and steps you can take to solve them. When troubleshooting storage or Wire issues, start by verifying that Stone and Wire processes are running properly, and by checking the log files.
Verifying that Stone and Wire Processes Are Running
There are five processes that must be running for Stone and Wire to work:
■ sw_serverd
■ sw_probed
■ sw_dbd
■ sw_bwmgr
■ ifffsWiretapServer
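A quick way to confirm all five daemons are running, and to restart Stone and Wire if any of them is missing (a sketch; the restart command is the same one used elsewhere in this guide):
ps -ef | egrep 'sw_serverd|sw_probed|sw_dbd|sw_bwmgr|ifffsWiretapServer'
/etc/init.d/stone+wire restart    # only if one of the processes is not listed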
The current log file is named <process name>.log, where <process name> is the name of the Stone and Wire process or daemon. The next time Stone and Wire creates a log file for the process, it renames the previous log file by adding a number to the file name. For example, the sw_serverd process log file is named sw_serverd.log. The next time the process is launched, the first log file is renamed to sw_serverd.log.1.
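To inspect the most recent logs, assuming they are written under /usr/discreet/log (the location is an assumption; adjust the path to match your installation):
ls -lt /usr/discreet/log | head                # newest log files first
tail -n 50 /usr/discreet/log/sw_serverd.log    # last entries from the sw_serverd daemon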
Checking libraries for remote and lost frames... /usr/discreet/clip/stonefs/My_Project1/editing.000.desk has none /usr/discreet/clip/stonefs/My_Project1/Default.000.clib references 30 missing frames. /usr/discreet/clip/stonefs/My_Project2/editing.000.desk has none /usr/discreet/clip/stonefs/My_Project2/from_caplan.000.
rmmod celerityfc
modprobe celerityfc
NOTE Depending on the storage you are running, your system might not use all of the drivers listed. If your system does not use a driver listed, the commands to unload or reload the drivers will fail. You can ignore these failures. They just indicate that the driver is not required by your system.
3 Reload the Stone and Wire driver: /etc/init.d/stone+wire reload. Your filesystem should now be mounted.
NOTE The last sequence of numbers in the IP address defined by the HADDR keyword in the sw_framestore_map file does not have to match the Framestore ID. These values are often the same by default, but it is not a requirement for Stone and Wire operation.
4 Save and close the file.
5 Restart Stone and Wire: /usr/discreet/sw/sw_restart
6 If you continue to get error messages, contact Customer Support.
Solving a Partition ID Conflict
Each partition must have a different partition ID.
5 Type Y to confirm the operation. Invalid entries are removed from the Stone and Wire database.
6 Restart Stone and Wire: /etc/init.d/stone+wire start
Control fragmentation
Filesystem fragmentation is directly related to the amount of mixing and interleaving of blocks of data of different sizes, and is aggravated by multiple I/O clients concurrently writing data to the partition.
not jeopardized by requests from concurrent processes, including access from remote hosts such as Flare workstations. NOTE Bandwidth reservation policies apply only to I/O requests from Creative Finishing applications and tools. They cannot protect your storage bandwidth from I/O requests coming from third-party processes or user interactions. It is your responsibility to avoid using third-party tools with the frame storage. See Limit concurrent usage (page 45).
TotalAvailableWriteBandwidth=150 NOTE The total bandwidth parameters are estimates of the theoretical maximum bandwidth of the partition. The actual bandwidth is affected by several factors, including multiple applications trying to concurrently read or write to it. The Bandwidth Manager continuously measures partition performance and dynamically maintains the actual total available bandwidth for each partition.
In the following example, low-bandwidth values are configured for each process (300 MB/s for Flame, 100 MB/s for Flare, 10 MB/s for Wiretap and 10 MB/s for Wire). The diagram illustrates the way the Bandwidth Manager redistributes device bandwidth as the total available bandwidth decreases from 800 MB/s to 420 MB/s and then to 320 MB/s. Note how the Bandwidth Manager keeps the bandwidth for each application at the low bandwidth watermark.
Perform the steps in the procedure below to set up an optimal bandwidth reservation for the local application, as well as for remote workstations, based on your system configuration. To set up bandwidth reservation: 1 Open a terminal and log in as root. 2 Open the /usr/discreet/sw/cfg/sw_bwmgr.cfg file in a text editor. 3 Locate the [Device] section that corresponds to the standard filesystem partition (by default [Device0]), and uncomment it if it is commented out.
Reservation<n>=<application> [@<host>] <low bandwidth>[(<request size>)] [<high bandwidth>[(<request size>)]]
where:
■ <n> is the ID of the reservation, starting at 1 for each device.
■ <application> represents the name of the application that needs the reserved bandwidth.
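A hypothetical fragment of sw_bwmgr.cfg following this syntax; the application names and megabyte values are illustrative only (they echo the watermark example above), and the exact field layout should be verified against the comments shipped in the file itself:
[Device0]
Path0=/mnt/StandardFS
TotalAvailableReadBandwidth=800
TotalAvailableWriteBandwidth=800
Reservation1=flame 300
Reservation2=flare 100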
To set up bandwidth reservation for a group of applications:
1 Open the /usr/discreet/sw/cfg/sw_bwmgr.cfg file in a text editor.
2 In the [Groups] section, add a line for each group of applications you want to define. The syntax of the line is as follows:
<group name>=<application list>
where:
■ <group name> is the custom name of the group. The group name must not contain spaces and must not be the same as one of the predefined application names.
Use multi-threaded direct input output Most filesystems perform best when the I/O is parallelised across multiple threads/processes and sent asynchronously. This allows the device to buffer I/O operations and reorganize requests for optimal performance. Some applications perform better than others on the same storage device, based on how they perform their I/O. Applications that use single-threaded buffered I/O can be slow.
Test filesystem performance Each standard filesystem comes with its own set of tools to measure performance. XFS comes with the xfs_db command line tool for troubleshooting various aspects of the filesystem, including fragmentation. For information on using the xfs_db tool, consult the man page for xfs_db. Stone and Wire comes with a command line tool to measure filesystem performance called sw_io_perf_tool.
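For example (the installation path shown is an assumption; locate the binary first if it is not there):
find /usr/discreet -name sw_io_perf_tool    # confirm where the tool is installed
/usr/discreet/sw/tools/sw_io_perf_tool      # report read/write throughput for the partition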
If you use the pen and tablet while the application is starting, the tablet will fail to initialise. Press Shift+T+Insert to initialise the tablet if it does not function during a work session.
To start the software for the first time:
1 Log into your workstation and do one of the following:
■ WARNING The -v option deletes all material on the framestore. Use this option only if you have no material that you want to preserve on the framestore.
f <menu file> Use a custom menu file, where <menu file> is the name of the menu file. For information on custom menus, see the application help.
F Force the application to install new fonts that you added to the /usr/lib/DPS/outline/base directory (and the /usr/lib/DPS/AFM directory, if you have also installed the corresponding font metric file). See the application help.
h To list all start-up options, use the h option.
■ Customers not on subscription are entitled to only node-locked licenses.
1 If you are installing the application for the first time, request temporary license codes. For emergencies, you can acquire an immediate temporary emergency license using the emergency license generator at http://melicensing.autodesk.com/templicensing/. A 4-day license code is e-mailed to the address you provide.
2 Start the software you want to license.
remote workstation or on the same workstation as your Creative Finishing application. Characteristics of the Single License Server Model:
■ Of the two license server models, this configuration requires the least maintenance.
■ Because all license management takes place on a single server, you have just one point of administration and one point of failure. On the other hand, if the single license server fails, the Autodesk product cannot run until the server is back online.
Get the unique host ID of a license server
To get license numbers, you need the host ID of the license server, or servers in the case of a redundant network configuration. To get the unique host ID of the license server, in a terminal run /usr/local/bin/dlhostid. The number should look something like 25231AEF83AD9D5E9B2FA270DF4F20B1.
Request license codes
Request licensing codes from the Autodesk M&E Edge support portal: https://edge.autodesk.com/LicenseAssignment.
6 Click Install.
7 Start the license server (page 61).
The license wizard creates the following license files:
■ Workstation license: /usr/local/flexlm/licenses/DL_licenseNetwork.dat
■ For a local license server, it creates the license server license: /usr/discreet/licserv/licenses/DL_license.dat.
■ For a remote license server, you must create the license file for the license server manually. See Create a license file for a remote license server (page 59).
DAEMON discreet_l discreet_l
USE_SERVER
FEATURE flare_x86_64_2011_discreet_l 2011.999 18-nov-2009 8 \
6D7AE3402ECB46174B70 ck=47
6 Save and close the file. This file sets up the network licenses available for distribution by the license server to the Creative Finishing workstations on your network.
Configure the workstation to use a set of redundant license servers
To configure the workstation to use a set of redundant license servers, edit as root /usr/local/flexlm/licenses/DL_licenseNetwork.dat.
6 Repeat with /usr/local/flexlm/licenses/DL_license.dat for each workstation or node, using the same port as the one you set for the license server.
Start the license server
NOTE For redundant license servers, reboot each server in close sequence to properly restart the license system.
To start the license server:
1 Type the following in a shell: /etc/init.d/license_server start
WARNING The license server cannot start unless the license is entered correctly in DL_license.dat. Check the boot.
3 Determine the process ID number of the Creative Finishing application.
4 At the command line, type: kill <pid>, where <pid> is the process number you determined in the previous step. This command terminates the Creative Finishing process that is currently executing. There may be more than one Creative Finishing process running at any time. For example, there may be one process per CPU, plus some additional processes to manage the framestore. Kill each of these processes.
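A sketch of steps 3 and 4, assuming the application user and process name contain "flame" (adjust the pattern for your product):
ps -ef | grep -i flame    # note the process ID in the second column
kill <process_id>         # repeat for each remaining Creative Finishing process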
Chapter 2 Install and configure a Flare workstation
Installation workflows
Prerequisites
■ Check the System Requirements Web page. If upgrading, check that your Linux version is still up-to-date. To determine the Linux version of Red Hat Enterprise or CentOS, in a terminal run: cat /etc/redhat-release
■ Read the Release Notes and the Fixed and Known Bugs List.
■ If not using Red Hat, Prepare the CentOS disc (page 64).
Install Linux for Flare Prerequisites ■ Mouse, keyboard and graphics monitor are connected, and the graphics monitor is powered on. ■ If you are using a KVM switch, it is switched to the system on which you want to install Linux. ■ The DVD or CDROM drive is set as the primary boot device in the workstation BIOS. For information on configuring your workstation BIOS, refer to the documentation for your hardware. ■ Get the installer. Major releases are distributed on a USB drive.
2 If you did not download your distro as an ISO image:
1 Insert the DVD or first CD of your CentOS distribution into the drive. You do not need to mount it.
2 In a terminal, get an ISO image of the disc by typing: dd if=/dev/<device> of=/<path>/<image>.iso. For example: dd if=/dev/cdrom of=/tmp/Centos5.iso
3 Eject the disc.
NETWORKING=yes
HOSTNAME=workstation1
GATEWAY="10.1.0.25"
The GATEWAY value is used if no GATEWAY is defined in a network port's configuration file.
/etc/resolv.conf
Sample snippet from /etc/resolv.conf:
nameserver 192.9.201.1
/etc/hosts
You may need to edit the loopback setting, which may look like 127.0.0.1 vxfhost.localhost.localdomain localhost by default. Optionally add hostname / IP address pairs for other workstations on your network. Sample snippet from the file: 127.0.0.1 localhost.
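After editing the files above, reboot, or restart networking and verify that the settings took effect (a sketch; the network service script is the Red Hat default, and the gateway address is a placeholder):
/etc/init.d/network restart
hostname                   # should report the host name you configured
ping -c 3 <gateway IP>     # confirm the gateway is reachable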
■ On CentOS:
chkconfig yum-updatesd off
/etc/init.d/yum-updatesd stop
Install device drivers
After the Linux operating system is installed, perform the following procedure to install the required device drivers for your hardware. Check the system requirements at http://www.autodesk.com/flare-systemrequirements for qualified drivers.
To install hardware drivers:
1 In a terminal run init 3 to shut down the graphical environment and run in text mode.
Test your Linux environment If any of these tests fail, contact your hardware vendor, or your Linux vendor for assistance. Autodesk Customer Support does not provide support with Linux administration and configuration. ■ Confirm that you can use Linux in graphical mode at a resolution of 1900 by 1200 pixels. ■ Confirm that the proper version of Linux is installed. In a terminal, as root, run cat /etc/redhat-release The version must match one of the OS versions listed at www.autodesk.
Software is sometimes distributed as tar files. To extract from a tar file:
1 In a terminal, as root, use the md5sum command to verify that the checksum matches the md5sum listed in the checksum file.
2 Extract from the tar archive with tar -xvf filename.tar.
Install Flare
1 If you need to change your system date or time, do it before installing the application.
2 Prepare the installation media (page 31).
2 In a terminal, as root, stop Stone and Wire with the command: /etc/init.d/stone+wire stop
3 Create one or more Managed Media Cache directories:
■ If a mount point for your storage does not exist, create one, for example: mkdir -p /mnt/SAN1. Do not use the reserved word “stonefs” as the name for your mount point directory. Mount the filesystem to the newly-created directory. To mount it at boot, update /etc/fstab.
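A minimal sketch, assuming an XFS volume on /dev/sdc1 and the /mnt/SAN1 mount point used above (the device name and mount options are examples):
mkdir -p /mnt/SAN1
mount -t xfs /dev/sdc1 /mnt/SAN1
# to mount it automatically at boot, add a line such as this to /etc/fstab:
#   /dev/sdc1   /mnt/SAN1   xfs   defaults   0 0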
If you expect to use Flare for very I/O-intensive tasks, it is recommended to design a storage and networking solution accordingly. Regardless of the effectiveness of the Bandwidth Manager, the direct attached storage of Creative Finishing applications (running either Stone FS or a standard filesystem) was not designed to provide the functionality and performance of a high-end SAN storage device.
[Device0]
■ Path<n> specifies the partition's mount point. Since a partition can have several paths, <n> represents the number of the current path, starting at 0 for each device. For example:
Path0=/mnt/XYZ
Path1=/usr/local/ABC
■ TotalAvailableReadBandwidth represents the estimated total reading bandwidth of the device, in megabytes per second.
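Putting these keywords together, a minimal device section might look like the following (the paths come from the example above; the megabyte values are placeholders, not defaults):
[Device0]
Path0=/mnt/XYZ
Path1=/usr/local/ABC
TotalAvailableReadBandwidth=400
TotalAvailableWriteBandwidth=150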
In the following example, low-bandwidth values are configured for each process (300 MB/s for Flame, 100 MB/s for Flare, 10 MB/s for Wiretap and 10 MB/s for Wire). The diagram illustrates the way the Bandwidth Manager redistributes device bandwidth as the total available bandwidth decreases from 800 MB/s to 420 MB/s and then to 320 MB/s. Note how the Bandwidth Manager keeps the bandwidth for each application at the low bandwidth watermark.
Perform the steps in the procedure below to set up an optimal bandwidth reservation for the local application, as well as for remote workstations, based on your system configuration. To set up bandwidth reservation: 1 Open a terminal and log in as root. 2 Open the /usr/discreet/sw/cfg/sw_bwmgr.cfg file in a text editor. 3 Locate the [Device] section that corresponds to the standard filesystem partition (by default [Device0]), and uncomment it if it is commented out.
Reservation<n>=<application> [@<host>] <low bandwidth>[(<request size>)] [<high bandwidth>[(<request size>)]]
where:
■ <n> is the ID of the reservation, starting at 1 for each device.
■ <application> represents the name of the application that needs the reserved bandwidth.
To set up bandwidth reservation for a group of applications:
1 Open the /usr/discreet/sw/cfg/sw_bwmgr.cfg file in a text editor.
2 In the [Groups] section, add a line for each group of applications you want to define. The syntax of the line is as follows:
<group name>=<application list>
where:
■ <group name> is the custom name of the group. The group name must not contain spaces and must not be the same as one of the predefined application names.
To set up:
1 Install the license server software (page 57) if you do not already have a license server in your network.
2 Get license codes (page 77).
3 Create a license file for a remote license server (page 59).
4 Configure nodes or workstations to get a license (page 78).
5 Optionally, Change the default port used by the license server (page 60).
Install the license server software
The license server is a Linux daemon that provides concurrent licenses.
FEATURE: License strings for the software and feature entitlements.
To create the license server file on a license server:
1 Log in as root to the license server.
2 Navigate to the licenses directory by typing: cd /usr/discreet/licserv/licenses
3 If the file DL_license.dat does not exist in the directory, create it: touch DL_license.dat
4 Open the file DL_license.dat in a text editor.
5 Enter the information provided by Autodesk in this file.
To change the default port used by a license server:
1 Log in as root to the license server and open /usr/discreet/licserv/licenses/DL_license.dat for editing.
2 Find the SERVER line. By default, no port number is specified at the end of the SERVER line for a single license server, and the license server uses a default port number in the range of 27000-27009. By default, redundant license servers are set to port 27005.
3 Enter a different port at the end of the SERVER line.
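For example, a single-server SERVER line with an explicit port added at the end might look like this (the host name, host ID, and port number are placeholders):
SERVER licserver1 DLHOST01=25231AEF83AD9D5E9B2FA270DF4F20B1 27010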
Chapter 3 Networked processing
Deploying on networked hardware
By default, all of the software needed in a Creative Finishing workflow is installed on a single workstation. If the workflow requires more processing than can be handled by that workstation, some or all of the following components can be moved to other machines:
■ Backburner Manager (page 107) and Backburner (page 81), which get jobs from the workstations and distribute them.
By default it is installed on all Creative Finishing workstations, but you can install it on networked computers if you want to offload some processing. If you do not want to use the local Backburner Manager installed on your workstation, type the following commands to disable it:
chkconfig backburner_manager off
/etc/init.d/backburner_manager stop
If you stopped the local Manager, open /usr/discreet/backburner/cfg/manager.
■ Windows 7 Professional, 32 or 64 bit
Installation
1 Download the appropriate file for your system from Autodesk.
2 Unzip the package and run backburner.exe.
3 Follow the installation prompts in the installer.
Start and configure the Backburner Server:
1 From the Start menu, choose Programs, Autodesk, Backburner, and then Server. The first time you start the application, the General Properties dialog appears.
To set up Backburner Server to run as a Windows service:
1 Create a 'privileged' user account to give the Backburner Server access to the network mount points containing the needed frames, textures, scenes, storage, etc. You create a user account for use by the Backburner Server service using the Windows Control Panel. You must create the identical account on all workstations serving as render nodes.
Use cmdjob:
1 Open a DOS shell or Linux/Mac terminal and navigate to the Backburner folder.
2 Submit a job or jobs to the cmdjob utility using the following syntax: cmdjob <options> <executable> <parameters>. You can use options, parameters, and tokens at the command line of a DOS shell or Linux terminal, as well as in a batch file or script. Options, parameters, and tokens are not case-sensitive.
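A minimal sketch on Linux, assuming the default Backburner folder and a user-supplied script (the script path is hypothetical; run cmdjob with no arguments to list the options and tokens supported by your version):
cd /usr/discreet/backburner
./cmdjob /usr/local/bin/archive_job.sh    # submits the script as a background job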
%*jpX: Same as %jpX, where * indicates the number of zero-padded digits to use.
Restart Backburner Manager and Backburner Server
Backburner Manager and Backburner Server must be running before you can submit jobs to the background processing network. They start automatically, so you do not need to manually start them. If you are having problems with Backburner Manager and Backburner Server, restart them.
The Backburner network can be monitored via a Windows-based or browser-based monitor. The Windows monitor is well-suited for a setup with a single creative workstation, or the administrator workstation on a larger system. The browser-based monitor is designed for the non-administrator workstations.
Setup on Windows Before users can access the Web Monitor, you must install the following software on the workstation running the Backburner Manager: ■ Apache HTTP server (Linux/Windows/Mac) or Microsoft Internet Information Services (IIS) (Windows only) ■ Backburner Web Server Users without administrator privileges can fully manage their own jobs, but can only monitor the status of other jobs in the Web Monitor. Those with administrator privileges can manage all jobs and render nodes.
4 Right-click Default Web Site and choose Properties. In the dialog that appears, open the Documents panel and then click Add. Enter index.html in the Add Default Document dialog. This must be added to the document list for the Web Server to work. The Web Server does not work with the default index.htm entry. 5 Click OK, and double-click Default Web Site. Icons for the shared backburner and cgi-bin folders appear in the right pane. Edit the properties of backburner and enable Anonymous Access.
Job task permissions:
■ Suspend: normal users (own and other jobs), admin users (all jobs).
■ Restart: normal users (own and other jobs), admin users (all jobs).
■ Archive/Restore: normal users (own jobs only), admin users (all jobs).
■ Modify Settings: normal users (own jobs only), admin users (all jobs).
■ Delete: normal users (own jobs only), admin users (all jobs).
To find jobs and view their status:
1 Launch a web browser, log in to the Backburner Web Monitor, and connect to a Backburner Manager.
2 Click the Jobs tab. The Job list appears, showing all jobs on the system. Their status, progress, and other information is also displayed.
Owner: The owner of the job, and the host from which it was submitted.
5 Double-click on a job of interest to view its details and settings.
General Info tab
Description: Job description as entered when the job was submitted.
Submitted By: The owner of the job, and the host from which it was submitted.
State: The current state of the job.
Priority: The job priority, from 0 to 100. Zero is the highest priority. 100 means the job is suspended. Default is 50.
Max Server Count: The maximum number of render nodes made available for the job, as specified when the job was submitted. Set to 0 (zero) to assign the job to all servers.
Assigned Servers: A comma-separated list of servers currently assigned to the job.
Filter on Job Type: Select this checkbox to list only the servers installed with the required adapter.
Name: Host name of the server.
Assigned to Job: A checkbox indicating whether or not the listed server is assigned to the job.
identical to resubmitting the job from the creative application, without the need for that application to be running. To delete a job: 1 On the Jobs tab, select the job of interest and choose Delete from the Action menu. 2 When prompted, click OK. The job is deleted from the system and removed from the Job list. Deleting a job completely removes it from the job queue and Backburner system. It does not, however, destroy source material or rendered results. Deleting cannot be undone.
Name: Server name (host name).
Description: A short description of the server.
Status: Current server activity:
■ absent: Server is no longer seen by the manager, possibly down.
■ active: Currently working on a job.
■ suspended: On hold.
■ idle: Inactive.
■ error: Problem on the server.
Perf. Index: A value in the range [0-1] indicating the performance level of the render node, relative to the other servers on the same job.
2 In the Job Details page, click on the Server Assignment tab.
Server Assignment tab
Assigned Server Group: Name of the server group, if any, to which the job was assigned. A server group is a named collection of servers. Only servers in the specified group will work on the job.
Max Server Count: The maximum number of render nodes made available for the job, as specified when the job was submitted. Set to 0 (zero) to assign the job to all servers.
5 Verify your changes by clicking the Refresh button. This queries the Backburner Manager for the most up-to-date information. The Assigned Servers list is updated to reflect your changes. 6 Click Close to return to the list of all servers. Delete a render node: 1 Before deleting a node, consider archiving jobs that made use of it, to preserve job details, including the nodes to which tasks were sent. 2 On the Servers tab, select the node of interest, and click the Delete button.
4 Once you are satisfied with your choices, click OK to commit the changes. Server groups you create in the Backburner Web Monitor appear as global groups in Backburner Windows Monitor. To assign a server group to a job: 1 On the Jobs tab, select the job of interest and choose Settings from the Action menu, or double-click the job. 2 In the Job Details page, click on the Server Assignment tab. 3 Choose a server group from the Assigned Server Group menu.
have job processing halted on the server after its first failure. Default is 3.
Time Between Retries (Job Handling): The time before the Backburner Manager attempts to re-start a job on a server that has failed. Works in conjunction with Retry Count. Default is 30 seconds.
On Job Completion: Specifies what happens to a job once it has successfully completed:
■ Leave: Job is left in the job list.
Windows Monitor The Backburner Manager maintains a database, which it updates with every change of state of the render nodes. It then broadcasts the changes to every workstation running a Backburner Windows Monitor, whether the end-user is actively viewing it or not. The Windows Monitor can be launched from any Windows workstation on the network where it has been installed. The first Windows Monitor making the connection has full control over the job queue and Backburner network—that is, “queue control”.
■ Right-click a job in the Job list and choose Suspend. ■ To reactivate, select the job, then do one of: ■ Click the Activate button ■ Tap Ctrl+A. ■ Right-click the job and choose Activate. ■ From the Jobs menu, choose Activate. Modify job settings: ■ From the Jobs menu choose Edit Settings. ■ Right-click the job and choose Edit Settings. ■ Press Ctrl+J. 1 Select the job of interest in the Job list.
Server Group: The server group to which the job is assigned. Only servers in the specified server group will work on the given job, unless the group is set to use idle non-group servers.
Restarting a job halts all processing for the job, clears the server of all job-related temporary files (including completed tasks), and restarts the job from its first task. It is identical to resubmitting the job from the creative application, without the need for that application to be running.
■ Confirm the action. Deleting a job completely removes it from the job queue and Backburner system. It does not, however, destroy source material or rendered results. Deleting cannot be undone. If you think you may need to run the job again in the future, or examine job details, consider archiving it instead. Managing Render Nodes To view render node status: 1 Start the Backburner Monitor and connect to a Backburner Manager. The Server List area occupies the lower panes in the monitor.
Item Description CPUs The total number of CPUs installed on the system. IP address The server's IP address. This is used by the Backburner Manager to communicate with the server. Perf. Index A value in the range [0–1] indicating the performance level of the render node, relative to the other servers on the same job. A score of 1 indicates this is the best-performing server. Available Disk Space Disk space available for rendering. burn, mio, Command Line Tool, Wire, etc.
To delete a render node: 1 Deleting a node can make it more difficult to troubleshoot jobs with problems, since it will be more difficult to determine which node carried out the flawed work. Before deleting a node, consider archiving jobs that made use of it, to preserve job details, including the nodes to which tasks were sent. 2 Select the render node(s) of interest. Only nodes marked by the system as absent can be deleted. 3 Choose Delete Server from the Servers menu, or by right clicking the node.
2 Configure the behaviour of the group:
Name: The name of the server group as it will appear in the UI.
Weight: Adjusts the priority of jobs assigned to the server group. Jobs assigned to a high-weight server group are given higher priority than jobs assigned to lower-weight groups. In fact, a job assigned to a high-weight group may be rendered ahead of non-group jobs, even if the non-group jobs have higher priorities at the job level.
2 When prompted to confirm your action, click Yes. The group is deleted from the Server list. The render nodes themselves remain untouched, and can be assigned to other groups, as needed. Use the following procedures to create or delete a named collection of render nodes, called a server group, and to assign a server group to a job. NOTE Two kinds of server groups can be created, local groups and global groups. In almost all cases, you will want to create global server groups only.
4 Once you are satisfied with your choices, click OK. To assign a server group to a job: 1 Select the job(s) of interest in the Job list. 2 In the Server list, right-click the server group and choose Assign Group to Selected Jobs. ■ If nodes in the group are busy, they complete their currently-assigned jobs before working on the new job to which you have assigned them. Otherwise, they begin working on the new job immediately.
Linux setup
Normally, there should be no need to configure the Backburner Manager. The most common changes, such as specifying the default mail server through which Backburner sends job-related email notifications, can also be made via the Backburner Web Monitor.
To start and configure Backburner Manager:
1 In a terminal, as root, stop the Backburner Manager service: /etc/init.d/backburner stop. The Backburner Manager service on the workstation is stopped, if it was running previously.
5 Click OK to save your changes. The configuration settings are written to the Backburner configuration file, backburner.xml. 6 Restart the Backburner Manager for the changes to take effect. You can set up the Backburner Manager to run as a Windows service so that it starts with the workstation's operating system and runs in the background. When running as a service, no GUI is presented—events are logged to the log file only.
Use Server Limit: The maximum number of Render Nodes that will be allocated for a specific job. This feature can override the server limit settings in some applications. For information, see the application's Advanced Settings Dialog.
Use Task Error Limit: The number of times a Render Node retries a task before suspending it.
on a network file server, called backburnerJobs. The Win32 job path would be set to \\fileserver\backburnerJobs and jobs you submit placed on the file server. Job path settings Field XML Element(s) Description Use Jobs Path When enabled, defines job location using the Win32 or UNIX paths. This tells the Render Nodes to get the job files from this location, minimizing the file I/O traffic on the Manager workstation. Win32 Path The Windows file path where jobs are located.
Burn Architectural overview Architectural overview Burn is a Linux-based network processing solution. Components ■ Render node: a computer running Burn. ■ Does imaging processing which frees a workstation for more creative tasks. ■ Render nodes without GPU-accelerated graphics cards cannot process jobs that require a GPU (such as floating point jobs). They can only process jobs in software mode, using the OSMesa API.
4 Upgrade the Creative Finishing workstations to the same version as the version of Burn you are about to install. Each version of Burn is compatible with only one version of Autodesk Creative Finishing applications. 5 Install the Burn software on each node. 6 Run the software (page 125). Install the Smoke for Mac distribution of Burn Two distributions of Burn cannot be installed on the same node. However, either distribution can process jobs sent from a Mac or Linux product, as long as it is licensed.
The default root password for a Linux installation on a node is password.
Prepare the CentOS disc
Before installing the CentOS distro for non-Autodesk hardware, you must add the Autodesk kickstart file to the DVD or first CD of your distribution so the Linux installer can install the required packages. The custom Autodesk DVD of Red Hat Enterprise Linux for Autodesk hardware already contains the Autodesk kickstart file. To copy the kickstart file to the disc:
■ The network gateway IP address.
■ The subnet mask of your network.
■ DNS server IP address(es).
/etc/sysconfig/network
Sample snippet from /etc/sysconfig/network:
NETWORKING=yes
HOSTNAME=workstation1
GATEWAY="10.1.0.25"
The GATEWAY value is used if no GATEWAY is defined in a network port's configuration file.
/etc/resolv.conf
Sample snippet from /etc/resolv.conf:
nameserver 192.9.201.1
/etc/hosts
You may need to edit the loopback setting, which may look like 127.0.0.1 vxfhost.localhost.
Configure an InfiniBand card To use the render node in an InfiniBand-connected background processing network, it must be equipped with an InfiniBand network adapter. The precompiled QuickSilver (QLogic) InfiniServ 9000 HCA adapter drivers for the Red Hat Enterprise Linux kernel are included in the dist/ib subdirectory of the installation package. If you are using CentOS, you need to manually compile the InfiniBand driver for your version of the Linux kernel.
4 Reboot the system.
Prepare the installation media
Check the release announcement to find out on what media the installers are available. Major releases are distributed on a USB device.
To mount a USB device:
■ Attach the device and log in to the terminal as root. On Red Hat 6, change directory to the USB mount point at /media/AUTODESK/. On Red Hat 5, continue with the following steps.
■ Use the dmesg command to list the most recently connected device; the output includes something like sdf: sdf1.
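A sketch of the remaining Red Hat 5 steps, assuming dmesg reported the device partition as sdf1 (the device name and mount point are examples; /mnt/usbdisk matches the path used in the DKU instructions):
mkdir -p /mnt/usbdisk
mount /dev/sdf1 /mnt/usbdisk
cd /mnt/usbdisk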
2 From the installation directory, run the installation script ./INSTALL_BURN to install Burn and Backburner Server.
3 If you are installing the Smoke for Mac OS X edition of Burn, you are prompted to enter the license server name or address and the license server MAC address. For more information on licensing Smoke for Mac applications, see the Smoke Installation and Licensing Guide.
Server Group [BackburnerManagerGroup] Specifies a server group (a preset group of render nodes) used to process jobs submitted by the application. By default, Backburner Manager assigns a job to all available render nodes capable of processing it. If you have a dedicated group of render nodes for processing jobs, set the value to the name of the render node group. See the the Backburner User Guide for information on creating groups.
Install additional fonts During the install, the same fonts that are installed by default with your Creative Finishing application are installed. However, if you installed additional fonts on the workstation that are not provided with your application, you must also install those fonts on each render node. Contact your third-party font supplier(s) for information about Linux support for those fonts. Ensure any 3D Text fonts used with Action nodes in the Batch setups you submit to Burn are installed.
Managing Multiple Burn Servers on a Render Node You can have multiple versions of the server installed on a render node to handle jobs from different Burn clients. For example, you can run the Burn 1.6 and current version servers to allow the same render node to handle jobs from the Burn 1.6 client used by Flame 9.0 and Smoke 6.5, as well as jobs from other Autodesk applications that use the latest version.
6 Start the previous version of Burn with /etc/init.d/burnclient start
License your software
You can install the software without a license, but you must license it before you can use it. A “floating” license system is used, made up of the following components.
1 License Server: A Linux daemon that provides concurrent licenses to computers on your network as needed.
2 Licensing clients: Each computer on the network that requests a license from the License Server.
For a redundant network license server configuration, you must install the license server software on all three workstations selected as license servers. To install the license server, as root, run ./INSTALL_LICSERV from the software installation directory.
Create a license file for a remote license server
After you receive your license codes, edit the /usr/discreet/licserv/licenses/DL_license.
Configure nodes or workstations to get a license
Create a license file on each computer so that it can get a license from the license server. Do this even if the server and client are on the same machine.
1 Log in as root and open /usr/local/flexlm/licenses/DL_license.dat for editing. If it doesn't exist yet, create it.
2 Copy the SERVER, DAEMON, and USE_SERVER lines into the license file.
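For example, the client file might end up containing the following three lines copied from the server's license file (the host name and host ID shown are the sample values used elsewhere in this guide):
SERVER exuma-001 DLHOST01=886C2B75E8E57E4B03D784C3A2100AC0
DAEMON discreet_l discreet_l
USE_SERVER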
Run the software
Overview
Once Burn is installed and licensed, send jobs from your Creative Finishing applications to the background processing network. The background processing network refers to all the nodes on the physical network that are used for background processing. The following procedures provide a general overview for doing background processing and assume that the network is configured properly, including the TCP/IP settings.
2 Troubleshoot the background processing network (page 126). 3 Review the Burn and Backburner logs from render nodes on the network. 1 Create a list of render nodes from which Burn and Backburner logs should be collected. On your Creative Finishing workstation, log in to the account for your Autodesk application and open a terminal. Run /usr/discreet//bin/GATHER_BURN_LOGS Run it with -h for usage.
If your network supports jumbo frame switching, test whether jumbo frames can be sent between the workstations and render nodes:
1 On a workstation or render node, open a terminal and run ping using the -s option to set the packet size used for network communications. Type: ping -s 50000 <hostname>, where <hostname> is the hostname or IP address of the workstation or render node you are trying to reach.
2 Open the /etc/amd.conf configuration file in a text editor and change /net to /hosts. So the file contains the following: #DEFINE AN AMD MOUNT POINT [ /hosts ] 3 Save and close the file then restart the amd daemon: /etc/init.d/amd start Configure the NFS and amd services to start automatically By default, the NFS and amd services are set to start automatically on workstations and render nodes. Perform the following procedure to check these services, and reconfigure their startup mode if necessary.
Verify Stone and Wire connectivity from the background processing network Render nodes on a background processing network access frames on storage devices attached to the workstation using the Wire network. To ensure these storage devices are available to the render node: 1 Log in as root to a render node on the background processing network. In a terminal, view all storage devices available to the render node: /usr/discreet/sw/tools/sw_framestore_dump.
3 View the /usr/local/flexlm/licenses/DL_license.dat file to check that the render node is licensed for Burn. It should look something like the following. If it doesn't, contact Customer Support.
SERVER exuma-001 DLHOST01=886C2B75E8E57E4B03D784C3A2100AC0
DAEMON discreet_l discreet_l
USE_SERVER
4 Repeat the above for the remaining render nodes on the background processing network. If the License Server for your network is running on a render node, make sure you perform this procedure on this node as well.
To test render node hardware for Burn, log in to the render node as root and open a terminal. Run:
/usr/discreet//bin/verifyBurnServer
The verifyBurnServer script checks the hardware of the system to ensure it meets the requirements for render nodes, and displays the results.
If you suspect that a render node has failed due to a job exceeding the node's memory capacity, check the logs: 1 If you are running graphics on the render node, log in as root and open a terminal. Otherwise, just log in as root. 2 Navigate to /usr/discreet/log. This directory contains logs of events for the Burn servers installed on the render node. You need to view the log created at the time the server failed.
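As a hedged shortcut that is not part of this guide, you can also ask the kernel whether the out-of-memory killer terminated a Burn process, and scan the Burn logs for memory errors; the grep patterns are illustrative:

# Check the kernel log for OOM-killer activity
dmesg | grep -i "out of memory"
# List recent Burn server logs that mention memory
grep -il "memory" /usr/discreet/log/*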
2 In /usr/discreet/burn_/cfg/init.cfg uncomment the MemoryApplication keyword. This keyword sets the amount of RAM in megabytes (MB) to be reserved for jobs. This keyword is disabled by default so Burn can dynamically adjust the amount of RAM used for each job based on the resolution of the project. When you enable this keyword, Burn reserves the same amount of memory for each job regardless of the project's resolution.
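For illustration only, an uncommented entry might look like the line below; the keyword-value form follows the usual init.cfg convention, and the 8192 MB value is an arbitrary example, not a recommendation:

MemoryApplication 8192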
Wiretap Gateway is a Wiretap server that exposes any mounted standard filesystem as a Wiretap hierarchy of directories, files, and clip nodes, and streams them as raw RGB to local or remote Wiretap clients, such as WiretapCentral. If Wiretap Gateway is installed on a Mac equipped with a RED ROCKET card, it can use the card to improve the speed of decoding and debayering R3D files. Wiretap Gateway machines in your network are labeled as such in the WiretapCentral network tree, or in the Lustre file browser.
Autodesk Wire
This service enables high-speed transfer of uncompressed timelines, clips, and libraries between workstations, on industry-standard TCP/IP and InfiniBand networks, preserving all metadata.

Media I/O Adapter
The Media I/O Adapter is a Backburner processing engine that reads media from a storage device or Wiretap server, processes it, and then writes it to a storage device or Wiretap server.

Install and license Wiretap Gateway on a dedicated system
See Prepare the installation media (page 31).
3 Carefully add the code to the license file /usr/local/flexlm/licenses/DL_license.dat.
4 Save the license file and restart Wiretap Gateway with /etc/init.d/wiretapgateway restart.

Installing and Licensing the Wiretap Gateway Software Included with Smoke for Mac OS X

Install and license the Wiretap Gateway included with Smoke for Mac OS X. This version is for Smoke for Mac only. Install, configure, and license Smoke for Mac OS X before you install and license the Wiretap Gateway.
■ LimitDirs=/mnt

Proxy Quality for RED Footage
The LowresDebayerMode parameter sets the proxy quality level for viewing RED (R3D) media. Legal values: Full, Half Premium, Half Good, Quarter (default), Eighth.

Slave processes
To improve real-time playback of RED media, Wiretap Gateway can spawn multiple slave processes that increase performance without requiring additional licenses. This is set with the NumLocalSlaves parameter. The default setting is 4.
NOTE Do not use slave processes in conjunction with a RED ROCKET card.
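Taken together, a hedged sketch of these settings in the Wiretap Gateway configuration file might read as follows; only LimitDirs=/mnt comes from this section, and the other two lines assume the same key=value form:

LimitDirs=/mnt
LowresDebayerMode=Half Good
NumLocalSlaves=4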
Configure WiretapCentral

Setting Up User Access Control

Access control options
■ By default, no user name or password is needed to use WiretapCentral, and all jobs submitted from it to Backburner are owned by the user apache. As a result, all users can perform operations on any WiretapCentral job on the Backburner network, including suspending, activating, and deleting jobs submitted by other users.
■ You can assign the generic user “apache” administrator privileges for Backburner.
■ OS X: htpasswd -D /etc/apache2/auth/backburner.auth

Step 3 (Optional): Giving Specific Users Administrator Privileges

Users without administrator privileges can perform operations on the jobs they themselves submit, but can only monitor other jobs on the Backburner network. Users with administrator privileges can actively manage all jobs and render nodes. Administrator privileges are assigned in the Backburner configuration file.
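For context, the -D form shown above deletes a user from the password file. Adding users follows the same pattern; in this hedged example the user names wcuser and wcadmin are hypothetical, and -c creates the password file so it should only be used the first time:

# Create the password file and add the first user (prompts for a password)
htpasswd -c /etc/apache2/auth/backburner.auth wcuser
# Add further users to the existing file
htpasswd /etc/apache2/auth/backburner.auth wcadmin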
Test the installation

Wiretap Gateway:
1 Access the file browser in Lustre, the Network panel in a Creative Finishing application, or open WiretapCentral in a Web browser: http://<hostname>/WiretapCentral
2 Locate the Wiretap Gateway system in the list, and make sure the label “Gateway” or “Autodesk Wiretap Gateway Server” appears next to the system name.
For example, Lustre running on a Windows workstation can work with an Autodesk Creative Finishing product's soft-imported clip on a SAN or NAS. For the Windows workstation, the syntax of the path to the media files may resemble:
N:\myclips\clip1\frame1.dpx
On a Linux workstation, the path to the same media files may resemble:
/CXFS1/myclips/clip1/frame1.dpx
Creating a Host/Path Rule for Host-to-Host Translation

Create a host/path rule to translate the path syntax used by the source workstation (the workstation running the Wiretap server) to the path syntax used by the destination workstation. The syntax of the host/path rule is as follows:
You must enter a value for each attribute. The valid values for each attribute are described below.
■ group name: Identifies the name of the group. Create a group name of your choosing. Each group name must be unique. Use the value of this attribute in a host-to-host rule to map all members of the group to the same storage mount point.
■ host name: Identifies the name of a host that is in the group.
■ os: This attribute is optional.
All hosts running the same operating system must mount directories using exactly the same syntax. For example, all Windows workstations must mount the NAS on the N:\ mount point to use the same path translation rule for the NAS.
NOTE Platform names must be unique and must not conflict with host names or group names.
The syntax of the platform rule is as follows:
■ -p: Specifies the path on the Wiretap server host to translate.
■ -f: Specifies the file containing the paths on the remote host to translate to the path on the local host, delimited by new lines.
■ -H: Specifies the destination host name. The default is localhost.
■ -O: Specifies the destination operating system (Linux, Windows NT, Mac OSX).
NOTE Either -p or -f must be specified, but not both.
■ If you see only some of the Wire hosts (as opposed to all or none), check that each framestore has a unique Framestore ID.
2 Repeat this procedure on each Wire host.

Using ping to Test Network Communication

Try to ping your local host from a client machine. If this works, ping all other machines that should be accessible through Wire:
1 Type the following command: ping <hostname>
2 If ping fails, try using the machine's IP address (for example, 172.16.100.23) instead of its hostname.
Using sw_ping to Test Network Performance

Use the sw_ping command to test network performance. For more significant results, run the test 100 times:
1 Start sw_ping:
/usr/discreet/sw/sw_ping -framestore <name> -r -w -size <size> -loop <count>
■ -framestore: The name of the framestore to ping.
■ -r: Reads a buffer from the remote framestore.
■ -w: Writes a buffer to the remote framestore (non-destructive).
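A hedged example invocation, with a hypothetical framestore name and illustrative -size and -loop values (this excerpt does not state the units expected by -size):

/usr/discreet/sw/sw_ping -framestore stonefs -r -w -size 8 -loop 100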
Checking the Status of Network Interfaces

If you continue to have problems with your network, verify that your network interfaces are up and running:
1 Run: ifconfig
■ If your network interface is up and running, the word UP appears in the report for the interface. The report includes a line similar to the following:
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
■ If your network interface is not up and running, check the connections on your network card.
2 Enter your user name and password. The defaults are admin / admin. The Summary page appears. 3 Click Ports in the menu at the top. The Ports page appears, displaying an overview of the switch. Connected ports are displayed in green. 4 Click a port to view information and statistics on it. If you have ports with DDR connections that appear to be running at SDR speed (2.5 Gbps instead of 5 Gbps), unplug the cable and then plug it back in. The connection should run at normal DDR speed afterwards.
You can also run ShotReactor on a remote server.
1 Install Linux on the server you plan to use as the ShotReactor server and connect it to your local network. After installing the Red Hat Linux version that matches your hardware (version 4, 5, or 6) on the ShotReactor server, configure the IP address of the Ethernet port that connects ShotReactor to your network. The address you choose must not conflict with any other IP address on the network.
Lustre Background Rendering

During background rendering, a shot on the timeline is rendered by a background rendering network. This is different from ShotReactor, which renders shots on a shot-by-shot basis as they are colour graded, to enable improved playback performance. Background rendering in Lustre is done with Burn for Lustre, also known as the Lustre Background Renderer. This application is specific to Lustre and provides asynchronous background processing of Lustre render jobs.
/etc/init.d/browsed_ condrestart restarts BrowseD if it is already running.
2 Configure the init.config file for all machines that will use the BrowseD server to access centralized storage:
■ Username: The administrative user on the BrowseD server.
■ Password: The password for Username.
■ Port: All computers on the BrowseD network must use the same port to communicate. Set to 1055, the default. If configuring a render node or a workstation running on a GigE network, set this to 1044.
rendering nodes are connected over a dedicated background TCP/IP network. Render nodes can access media through NFS mount points, or by using the faster and recommended BrowseD service. See Configure Lustre BrowseD (page 151). You can have up to eight render nodes on the background rendering network.

Background rendering components

Lustre
The client. Lustre rendering jobs are submitted for background rendering through the Render > Backburner menu.
render nodes if you are using BrowseD for background rendering. See Remote Rendering with Burn and Wiretap in the Flame User Guide.

Share the storage for rw access from background render nodes

To allow read and write access, the storage must be exported from the system to which it is connected. This makes it possible for the background rendering components on the network to remotely mount the storage system.
NOTE Skip this section if you are using BrowseD.
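If you do share the storage over NFS, a minimal hedged sketch on the system that owns the storage might look like this; the /mnt/StorageMedia mount point and the export options are illustrative assumptions rather than values prescribed by this guide:

# /etc/exports entry giving the render nodes read/write access
/mnt/StorageMedia  *(rw,sync,no_root_squash)
# Re-export and verify
exportfs -ra
showmount -e localhost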
Install Linux on nodes

Render nodes purchased from Autodesk ship with the correct Linux distribution. If you did not purchase your node from Autodesk, get your own 64-bit distribution of Red Hat Enterprise Linux Desktop 5.3 with the Workstation option, customize it using the Autodesk kickstart file, and install it. The kickstart file is used to install the packages required for Burn, some of which are not installed as part of a general Linux installation.
Install and Configure Burn for Lustre on render nodes

■ Install Burn for Lustre.
■ Add the IP address of the machine where Backburner Manager is installed to the manager.host file on each render node.
■ Start the Backburner Server on each render node.
■ License Burn for Lustre.

Install Burn for Lustre on render nodes

When you install Burn for Lustre, the necessary Backburner components are also installed on the render node.
SERVER burn-01 DLHOST01=25231AEF83AD9D5E9B2FA270DF4F20B1
VENDOR lustre
USE_SERVER
3 Save and close the file.

Configure backburner server to detect Backburner Manager

Backburner Server needs to be able to detect the location of Backburner Manager to provide status information concerning the render jobs:
1 On the Backburner Manager system, open a terminal and log in as root.
2 Determine which IP address the Backburner Manager workstation uses to connect to the network.
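As a hedged sketch of how these steps might look from a shell, assuming the Manager's address turns out to be 192.168.1.10 and that manager.host lives under the Backburner configuration directory on each render node (the exact path may differ in your installation):

# On the Backburner Manager system: list the configured IP addresses
/sbin/ifconfig | grep "inet addr"
# On each render node: point Backburner Server at the Manager
echo "192.168.1.10" > /usr/discreet/backburner/cfg/manager.host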
2 Select your project in the Project drop-down list, and click Edit.
3 In the Project Settings menu, click Network Rendering, then click Backburner.
4 Enter the locations of the Project Home, Scans Full Home, Scans Half Home, Renders Full Home, and Renders Half Home, as seen from the Linux render nodes. You only need to enter those locations that are defined for the project in the local project setup, located in the Setup > Project menu.