
In addition to the rendering step, we also need to distort the rendered image to account for the effects of the Rift's lenses.
Practically speaking, the head pose is really a specialized kind of user input, and the Rift-
required distortion is part of the overall process of rendering a frame, but we’ve called them
out here as separate boxes to emphasize the distinction between Rift and non-Rift applications.
However, as we said, the Rift is designed to show a different image to each eye by presenting each eye with only one half of the display panel on the device. As part of generating
a single frame of output, we render an individual image for each eye and distort that image,
before moving on to the next eye. Then, after both per-eye images have been rendered and distorted, we send the resulting output frame to the device.[12]
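To make the frame flow concrete, here is a minimal sketch of the loop described above. The function and type names (readHeadTracker, renderScene, distortForLens, blitToPanelHalf, presentFrame, and so on) are hypothetical placeholders rather than Oculus SDK calls; with recent SDK versions the distortion and final presentation are normally handled for you.

#include <initializer_list>

enum class Eye { Left, Right };
struct HeadPose { /* orientation (and, on DK2, position) from the tracker */ };
struct Image    { /* pixel data for one eye's view */ };

HeadPose readHeadTracker();                    // placeholder: poll the head tracker
Image    renderScene(Eye, const HeadPose&);    // placeholder: draw the scene for one eye
Image    distortForLens(Eye, const Image&);    // placeholder: pre-distort for the lens
void     blitToPanelHalf(Eye, const Image&);   // placeholder: copy into that eye's half of the panel
void     presentFrame();                       // placeholder: push the finished frame to the Rift

void renderOneFrame() {
    const HeadPose pose = readHeadTracker();   // the head pose acts as a form of user input
    for (Eye eye : { Eye::Left, Eye::Right }) {
        Image view      = renderScene(eye, pose);
        Image distorted = distortForLens(eye, view);
        blitToPanelHalf(eye, distorted);       // each eye sees only its half of the display
    }
    presentFrame();                            // both eyes rendered and distorted: send the frame out
}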
Let’s take a closer look at the individual steps.
1.4.1 Using head tracking to change the point of view
The first way the Rift increases immersion is via head tracking, eliminating part of the
necessary mental translation when interacting with a computer-generated environment. If you
want to see what’s to your left, you no longer have to go through the process of calculating
how far to move your mouse, or how long to hold the joystick. You simply look to your left.
This is as much an instance of Natural User Interface (NUI) as Virtual Reality. NUI is all about
making the interface for interacting with a computer application or environment so seamless as
to essentially be no interface at all. Dragging a UI element across a touch screen by literally dragging it with your finger is a form of NUI. Changing your perspective within an artificial environment by moving your head is another.
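As a concrete illustration, here is a minimal sketch of how a tracked head orientation can feed into the camera's view matrix so that turning your head turns the view. It uses GLM for the math; getHeadOrientation() is a hypothetical placeholder for whatever the tracking layer reports, not an Oculus SDK function.

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::quat getHeadOrientation();   // placeholder: latest orientation from the head tracker

glm::mat4 buildViewMatrix(const glm::vec3& cameraPosition) {
    // The head orientation rotates the camera; inverting the camera transform
    // yields the world-to-eye matrix used to render the scene.
    glm::mat4 headRotation = glm::mat4_cast(getHeadOrientation());
    glm::mat4 camera = glm::translate(glm::mat4(1.0f), cameraPosition) * headRotation;
    return glm::inverse(camera);
}

Looking to your left changes the orientation reported by the tracker, which changes the view matrix, which changes what is drawn: no mouse or joystick input is involved.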
The Rift enables this kind of interaction by integrating sensor hardware that detects spatial
acceleration on 3 axes and rotation rate on 3 axes. The 3 rotation axes and 3 acceleration
axes add up to six degrees of freedom, commonly abbreviated as 6DOF.[13] This kind of
hardware is probably already familiar to users of mobile computing devices such as
smartphones and tablets, which now almost universally include such sensors. It’s also
commonly found in some game console hardware, such as controllers for Nintendo’s and Sony’s
lines of consoles. Most commodity hardware of this kind is intended to be wielded by hand and
doesn’t have stringent latency requirements, unlike what’s desirable for VR. As such, the Rift
tracking hardware is a step above what's typically found elsewhere, in terms of both reporting resolution and accuracy. However, even with their high quality, these sensors alone are
insufficiently accurate to track relative changes in position over time periods of more than a
second, so the DK1 kit is limited to tracking only the orientation of a user’s head.
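To make the 6DOF idea concrete, here is a hypothetical sketch of what one tracker sample contains and how it might be used; the type and field names are illustrative, not Oculus SDK types. Rotation rate only needs to be integrated once to yield an orientation, whereas acceleration would have to be integrated twice to yield a position, which is why the accumulated error grows too quickly to be useful for more than about a second.

struct TrackerSample {
    float gyro[3];    // rotation rate around 3 axes (rad/s)
    float accel[3];   // linear acceleration along 3 axes (m/s^2)
    float dt;         // time elapsed since the previous sample (s)
};

struct Orientation { float yaw, pitch, roll; };  // simplified Euler-angle representation

void integrateOrientation(Orientation& o, const TrackerSample& s) {
    // Simplified dead reckoning: accumulate the rotation rates over time.
    // (A real implementation uses quaternions and sensor fusion to limit drift.)
    o.yaw   += s.gyro[1] * s.dt;
    o.pitch += s.gyro[0] * s.dt;
    o.roll  += s.gyro[2] * s.dt;
}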
In the second version of the Rift development kit (DK2), this limitation has been overcome
by adding an infrared camera (separate from the Rift itself) as part of the kit. In combination with infrared LEDs built into the headset itself, the camera allows the position of the user's head to be tracked as well as its orientation.
[12] In the early versions of the SDK, distortion and rendering of the final output to the Rift display device had to be done by applications. Since 0.3.x, the distortion and rendering to the device are typically handled inside the SDK, though you can override this behavior.
[13] Note that this is a slightly different usage of the term 6DOF than when it is used to describe a system that tracks both position and orientation, since here we're tracking acceleration and rotation rate, each on 3 axes.