
722 Chapter 8: Modifiers
channels available for morph targets and materials.
Channel percentages can be mixed, and the result
of the mix can be used to create a new target.
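The mix described above is a linear blend of per-vertex offsets from the base mesh. As a minimal sketch of that math (plain NumPy, not 3ds Max code; the function name and example meshes are hypothetical):

```python
import numpy as np

def blend_morph_targets(base, targets, weights):
    """Blend morph targets as weighted vertex offsets from the base mesh.

    base:    (V, 3) array of base-mesh vertex positions
    targets: list of (V, 3) arrays, one per channel (same vertex count as base)
    weights: list of channel percentages (0-100), one per target
    """
    result = base.astype(float).copy()
    for target, pct in zip(targets, weights):
        result += (pct / 100.0) * (target - base)  # add the weighted delta
    return result

# Two-vertex example: mix a "jaw open" target at 50%.
base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
jaw_open = np.array([[0.0, -1.0, 0.0], [1.0, -0.5, 0.0]])
mixed = blend_morph_targets(base, [jaw_open], [50.0])
# "mixed" could itself be saved as a new target, as the text describes.
```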
On a mesh object, vertex count on the base object
and targets must be the same. On a patch or
NURBS object, the Morpher modifier works on
control points only. This means that the resolution
of patches or NURBS surfaces can be increased on
the base object to increase detail at render time.
A Flex modifier above the Morpher modifier
is aware of vertex/control point motion in the
Morpher modifier. If, for example, a jaw is
morphed to slam shut, then the Flex modifier
placed above the Morpher modifier in the modifier
stack can be used to make the lips quiver to
simulate soft tissue.
For an in-depth look at the Morpher modifier, see
the tutorial "Lip Sync and Facial Expression with
the Morpher Modifier."
See also
Morpher Material (page 2–1401)
Lip Sync and Facial Animation
For lip sync and facial animation, create a
character’s head in an "at rest" pose. The head can
be a mesh, patch, or NURBS model. Copy and
modify the original head to create the lip-sync
and facial-expression targets. Select the original
or "at rest" head and apply the Morpher modifier.
Assign each lip-sync and facial-expression target
to a channel in the Morpher modifier. Load an
audio file in the Track View sound track, turn on
the Auto Key button, scrub the time slider, and
view the audio waveform in Track View to locate
frames for lip sync. Then set the channel spinners
on the Morpher modifier to create key frames for
lip position and facial expression.
Teeth can either be a part of the model or animated
separately. If the teeth and head are two different
objects, model the teeth in an open position,
then apply the Morpher modifier and create one
target with the teeth closed. Eyes and head motion
can be animated after the morph keys are created.
Morph Targets for Speech
Nine mouth shape targets are commonly used for
speech. If your character speaks an alien dialect,
don’t hesitate to create extra morph targets to
cover its mouth shapes.
Include cheek, nostril, and chin-jaw movement
when creating mouth position targets. Examine
your own face in a mirror or put a finger on your
face while mouthing the phonemes, if necessary, to
establish the direction and extent of cheek motion.
Set lip-sync keys by viewing the audio waveform as
well as listening to the sound as you scrub the time
slider. Many mouth-position keys benefit from
being set a frame early. Often the mouth must
assume a shape before the appropriate sound is
uttered. For the word "kilo", the "K" mouth shape
precedes the actual sound, for example.
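This one-frame anticipation amounts to shifting mouth-shape key times earlier than the frames where the sounds are heard. A hypothetical sketch in Python (the phoneme list and helper are illustrative, not a 3ds Max API):

```python
# Hypothetical phoneme timing: (frame where sound is heard, channel name, weight %)
phoneme_keys = [
    (10, "K", 100.0),
    (14, "EE", 90.0),
    (20, "L", 80.0),
    (24, "OH", 100.0),
]

def anticipate_keys(keys, lead_frames=1):
    """Shift each mouth-shape key earlier so the shape precedes the sound."""
    return [
        (max(frame - lead_frames, 0), channel, weight)  # clamp at frame 0
        for frame, channel, weight in keys
    ]

early_keys = anticipate_keys(phoneme_keys)
```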
[Illustrations: mouth-shape targets labeled "A, I" and "E"]