Tripod pans: As with a locked-off shot, a pan from a camera that remains centered on a
locked-off tripod produces no parallax, so there is no way to determine which objects are
closer and which are farther away. Skip the Camera Tracker node and find another solution.
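The reason a pure pan cannot be solved is that image motion under rotation about the camera center is independent of depth. The following sketch (a hypothetical illustration, not part of Fusion) shows that two points on the same viewing ray, one ten times farther than the other, project to the same pixel both before and after the pan, so their motion carries no depth information:

```python
import numpy as np

def project(K, R, X):
    """Project world point X through a camera rotated by R about its center."""
    x = K @ (R @ X)
    return x[:2] / x[2]

# Simple pinhole intrinsics: 1000 px focal length, principal point (640, 360).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

# Two points on the same viewing ray at very different depths.
near = np.array([0.5, 0.2, 2.0])
far = near * 10          # same direction, 10x farther away

# A pure pan: rotate the camera 5 degrees about its vertical axis.
theta = np.radians(5)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])

# Both points land on the same pixel before and after the pan.
print(project(K, np.eye(3), near), project(K, np.eye(3), far))
print(project(K, R, near), project(K, R, far))
```

Because every depth produces the same image motion, the solver has no parallax to triangulate against, which is why these shots need a different approach.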
No detail: Clips like green screens without tracking markers lack enough detail to track.
If you are lucky enough to be involved in the shooting of these types of shots, including
tracking markers makes it much easier to get a good track. Without detail, camera
tracking will fail and you will need to find a more manual solution.
Motion blur: Fast camera motion or slow shutter speeds can introduce motion blur,
which will make it difficult to find patterns to track. It’s worth trying shots like these to
see if there are enough details to get a good solve, but know when to give up and turn to
another solution.
Rolling shutter: CMOS-based cameras sometimes introduce distortion due to the
shutter capturing different lines at slightly different times. This distortion can create
significant problems for camera tracking. Sometimes it is possible to create motion
vectors with the Optical Flow node to create new in-between frames without the
wobble distortion of the rolling shutter. Then you can use the corrected image to
connect to the Camera Tracker.
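The rolling-shutter wobble comes from each scanline being read out slightly later than the one above it, so camera motion during the readout skews the frame. A minimal sketch of the correction idea (assumed parameter names and a constant horizontal pan; a real Optical Flow-based fix works per pixel) computes how far each row must be shifted back to approximate a global-shutter frame:

```python
import numpy as np

def rolling_shutter_offsets(height, readout_time, frame_interval, flow_px_per_frame):
    """Per-row horizontal corrections (in pixels) that undo the skew a
    constant horizontal pan leaves in a rolling-shutter frame."""
    # Each row is captured this much later than the top row.
    row_delay = np.arange(height) * (readout_time / height)
    # Shift each row back by the motion that occurred during its delay.
    return -flow_px_per_frame * (row_delay / frame_interval)

# Assumed example values: 1080-row sensor, 20 ms readout,
# 24 fps, camera panning 12 px per frame.
offsets = rolling_shutter_offsets(1080, 0.020, 1 / 24, 12.0)
print(offsets[0], offsets[-1])   # top row unshifted, bottom row skewed most
```

In practice, per-pixel motion vectors replace the constant pan assumption, which is what makes the Optical Flow approach general enough to handle real footage.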
Parallax issues: When objects at different distances in a shot overlap in the frame,
the overlapping area can be misinterpreted as a corner. Having a tracker assigned to
an overlapping angle like this will cause errors as the parallax starts to shift and the
overlapping area slides. This can be solved in Fusion by removing that tracker before
running the solver.
Moving objects: It’s difficult to capture a shot where objects in the clip do not move.
People, cars, animals, or other objects may move in and out of a shot. These objects
move independently of the camera movement and must be eliminated, or they will cause
solving errors. You can fix these issues by masking out objects that are not “nailed
to the set.” The masks are then connected to the Track Mask input on the Camera
Tracker node.
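Conceptually, the mask tells the tracker which features to ignore: any track point that falls inside the rotoscoped region is discarded before solving. This sketch (an illustration of the idea, not Fusion's internal code; all names are assumptions) filters a set of track points against a boolean mask:

```python
import numpy as np

def filter_tracks(points, mask):
    """Keep only track points that fall outside the moving-object mask.

    points: (N, 2) array of (x, y) pixel positions
    mask:   2D boolean array, True where a moving object was masked out
    """
    xs = points[:, 0].astype(int)
    ys = points[:, 1].astype(int)
    keep = ~mask[ys, xs]          # drop points inside the mask
    return points[keep]

# Toy example: a 100x100 frame with a moving object covering the left half.
mask = np.zeros((100, 100), dtype=bool)
mask[:, :50] = True

tracks = np.array([[10.0, 20.0],   # on the moving object -> dropped
                   [80.0, 30.0]])  # on the static set    -> kept
print(filter_tracks(tracks, mask))
```

Only features "nailed to the set" survive the filter, so the solver sees motion caused purely by the camera.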
TIP: Some shots that cannot be tracked using Fusion’s Camera Tracker can be
tracked in dedicated 3D camera-tracking software like 3D Equalizer and PF Track.
Camera tracking data from these applications can then be imported into the Camera3D
node in Fusion.
Outputting from the Camera Tracker
Unlike most Fusion nodes, the Camera Tracker node has two outputs:
The primary output is a 2D view used when you are setting up the Track, refining the
camera, and performing your initial solve.
There is also a 3D output used after your initial solve for viewing the camera path
and point cloud in 3D space. This view can be helpful when you are refining tracks
to increase the accuracy of the solve and aligning your ground plane. It can be used
simultaneously with the 2D output in side-by-side views.
Note that the selection of tracks in the 2D view and their corresponding locators (in the point
cloud) in the 3D view are synchronized. There are also viewer menus available in both the 2D
and 3D views to give quick control of the functionality of this tool.
Chapter 77 – 3D Camera Tracking