Lip Sync Animation

Based on instruction derived from my research, it seems that at least the base of the lip sync should be completed first, before the body. This is because the body’s constant movement would make completing the lip sync animation more difficult. I learned about and implemented Animation Layers into my workflow. This allowed me to separate the different parts of my animation and turn them on and off as needed, for ease and focus. I believe this method will be especially useful when I want to view different parts of the animation separately. For example, my layers were separated into main body poses, breathing, facial expressions and lip sync. I also experimented with duplicating the layers (and therefore also the animations) to produce a more extreme animation style; however, the outcome of these experiments was not suitable.
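
As a rough illustration of this layer setup, here is a minimal sketch using Maya’s Python commands (maya.cmds). The layer names are my own placeholders rather than the exact names in my scene.

```python
import maya.cmds as cmds

# Create one animation layer per part of the performance.
for name in ["BodyPoses", "Breathing", "FacialExpressions", "LipSync"]:
    if not cmds.objExists(name):
        cmds.animLayer(name)

# Add the currently selected face controls to the lip sync layer,
# then mute the body layer so only the mouth reads while I work on it.
cmds.animLayer("LipSync", edit=True, addSelectedObjects=True)
cmds.animLayer("BodyPoses", edit=True, mute=True)
```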

Lip Sync Chart – Phonemes

Figure 1. Lip Sync Chart. Grouped based on similar mouth shapes.

As figure 1 shows, I collected mouth shape imagery and grouped it based on sounds and similar mouth shapes. In doing this, I am starting to collect primary and secondary references to use as visual aids when I complete the lip sync animation. The timing for the animated dialogue will be referenced from the audio itself as well as my actor’s video version of the dialogue line, on top of watching tutorials. These methods will keep me applying the advanced techniques that are necessary for completing this animation, ensuring that I follow proper professional animation procedure rather than my intuition/impulse for how I want the animation to look (my own methods may be unrealistic and time-consuming, since I tend to focus too much on details). I will attempt to focus on implementing fundamental animation principles instead.

Figure 2 is a quick organisation of the lip sync dialogue, showing where the main character will actually be seen on screen, as opposed to when the camera cuts away from the dialogue or when Grace is talking.

Figure 2. Which lines will need lip sync.
Figure 3. Breakdown of dialogue lines into sounds.

Furthermore, you can see the application of this in figure 3, where I have organised identifiers within my dialogue based on the research I did into lip sync animation. The principles outlined that, first, key sounds should be identified, as well as the kind of sound being made. After all, there is a difference between the mouth shapes that the sounds “Aaa” and “Ah” produce, and it is important to understand which is which before animating. I used an online converter to help with this, changing the output where necessary; most of it seems correct and more accurate than I could have managed myself. The blue indicates the sound breakdowns of the dialogue, which is written in black. The flow of the dialogue was roughly mapped out in green, though this will certainly be adapted to the body once I animate them together. Finally, the emphasised words of the dialogue lines were highlighted in red. These show where I imagined the most strengthened words and the strongest emotional subjects would be. In the animation, these words should have the largest mouth shapes and held poses, as I have often seen other animators do in animated films.
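
Purely for illustration, a similar sound breakdown could also be scripted with the CMU Pronouncing Dictionary via the `pronouncing` Python package. This is my own substitution, not the converter I actually used, and the dialogue line below is a placeholder rather than a line from my script.

```python
import pronouncing

line = "hold the door"  # placeholder dialogue, not my actual script
for word in line.split():
    phones = pronouncing.phones_for_word(word)
    # Each entry is an ARPAbet breakdown, e.g. "HH OW1 L D" for "hold".
    print(word, "->", phones[0] if phones else "not in dictionary")
```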

Breaking Down the Dialogue Words

Figure 3. (Animates, 2019)

After looking through many video tutorials and research books, I have decided on the method I will use to complete my lip sync animation. I also took into consideration the fact that I already have the phonemes ready to animate with from my Advanced Skeleton auto-face rig.

Note to self: most research sources state that you should hold the ‘M’ pose for an extra frame, and that when you have finished animating you should move your whole animation forward a frame or two, since we make the mouth shapes slightly earlier than the sounds that come out.
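
A minimal sketch of that final offset in Maya’s Python commands, assuming the face controls are selected; shifting the keys two frames earlier is my reading of the advice above, not a prescribed value.

```python
import maya.cmds as cmds

# Work over the current playback range.
start = cmds.playbackOptions(query=True, minTime=True)
end = cmds.playbackOptions(query=True, maxTime=True)

# With the face controls selected, shift every key two frames earlier
# so the mouth shapes slightly lead the audio.
cmds.keyframe(edit=True, relative=True, timeChange=-2, time=(start, end))
```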

The method consists of nine primary mouth shapes, based on the most basic groups of sounds that a person makes when talking. While I already have my phonemes ready in the rig, this is a useful thought process to adopt: prioritising the reuse of these shared mouth shapes for all of the mouth sounds (a small mapping sketch follows the list below). The nine mouth shapes consist of:

  • N
  • M, B, P
  • F, V
  • L, T, H
  • OU
  • CH
  • OH
  • E
  • AH
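
As a rough lookup of how I intend to reuse these groups, here is a small Python sketch; the pose names are my own placeholders and not the actual attribute names on the Advanced Skeleton rig.

```python
# Rough lookup of the nine core mouth shapes: each sound maps to the
# shared pose I plan to reuse for it. Pose names are placeholders.
MOUTH_SHAPES = {
    "N": "N",
    "M": "MBP", "B": "MBP", "P": "MBP",
    "F": "FV", "V": "FV",
    "L": "LTH", "T": "LTH", "H": "LTH",
    "OU": "OU",
    "CH": "CH",
    "OH": "OH",
    "E": "E",
    "AH": "AH",
}

def pose_for_sound(sound):
    """Return the shared mouth shape for a given sound, defaulting to AH."""
    return MOUTH_SHAPES.get(sound.upper(), "AH")
```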

The expressions are premade, just like my phonemes, and are then keyed into the appropriate places. The jaw movement and mouth shape are adjusted, and then the whole animation is refined. I have found research on lip sync which suggests that this is the incorrect method, since the result can be quite mechanical. However, I believe it is a valuable method that I can make work quickly: applying the phonemes at the blocking animation stage, then refining the lip sync and the character’s emotions together with the body movements, adding emphasis and exaggeration to the appropriate parts.
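
A minimal sketch of that blocking step, again in maya.cmds; the control and attribute names here are hypothetical placeholders, since the real phoneme controls belong to the Advanced Skeleton rig.

```python
import maya.cmds as cmds

# Hypothetical control and attribute names -- the real rig exposes its own
# phoneme controls, so treat these purely as placeholders.
PHONEME_CTRL = "FaceCtrl"

def key_phoneme(attr, frame, value=1.0):
    """Block in a phoneme pose by keying its weight at the given frame."""
    cmds.setKeyframe(PHONEME_CTRL, attribute=attr, time=frame, value=value)

# Example: hit the 'MBP' shape on frame 24, relax it again a few frames later.
key_phoneme("phonemeMBP", 24, 1.0)
key_phoneme("phonemeMBP", 28, 0.0)
```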

Ways to add personality and emotion:

This section outlines ways I can improve my animation by adding personality and emotion to the significant moments.

  • Sneer
  • Asymmetrical face, e.g. moving one side of the mouth, the eyebrows, a closing eyelid or the cheeks.
  • Upper face = thoughts.
  • Lower face = feelings.

Phoneme Play Blasts

See the next video, figure 4, for the lip sync animations where I implemented the rig phonemes.

My tutor’s feedback was very positive; he suggested that this level of animation is enough for lip sync seen at a distance, since it will not need much detail from further away. This puts my work progress into perspective, as I tend to go too deep into my work, perfecting certain aspects that bother me. With this feedback, I feel a sense of security that I can move on from this stage and complete my blocking animations (of the whole body) sooner rather than later. He also mentioned that there is good anticipation before the dialogue lines, but that I should add more after the spoken words as well – holding the position on an open mouth.

References

Animates, S. (2019). HOW TO ANIMATE LIPSYNC – 3D Animation Tutorial. [online] YouTube. Available at: https://youtu.be/pcPUp8VKjME [Accessed 4 Dec. 2022].
