Previsualization is an extremely valuable tool for filmmakers working with digital effects. Animatics (rough stand-in animation) can be generated on a quick turnaround and dropped into edited scenes to check timing and continuity.
But there is a drawback: animatics often fail to convey the true feel of a fully rendered shot, in terms of motion and object scale. A movement that looks fine on untextured stand-in models may look too quick or awkward once the final models are substituted and motion blur is added.
Ideally the production pipeline should be able to accommodate additional iterations at this stage, but often it’s too late to make changes. Usually the in/out points and action timing must be locked early on during 3D production to allow music and sound work to proceed.
This problem results in shots that look mistimed or out of scale, even though the lighting is as good as it can be. For example, I think I saw many cases of this in the new Spider-Man movies, like the shot in the video below where Spider-Man jumps onto the moving car.
The timing of the jump and landing probably looked fine in the animatic, but once fully lit, Spider-Man seems to move too quickly; there is a feeling of lightness to his body where there should be solid weight. I imagine the animator really wanted Spider-Man to hang in the air for four or five more frames, but was stuck: Spider-Man has to touch the car at frame 19, because that's when the sound effect goes off, and it's too late to get that changed.
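To see why even four or five frames matter, here is a minimal back-of-the-envelope sketch, assuming the film-standard 24 fps (the frame numbers are the ones from the example above; the function name is just for illustration):

```python
# At film's standard 24 fps, each frame is ~41.7 ms on screen, so
# holding a landing for a few extra frames visibly shifts the beat
# that a locked sound effect was cued against.
FPS = 24  # assumed frame rate (film standard)

def frame_to_seconds(frame: int, fps: int = FPS) -> float:
    """Time (in seconds) at which a given frame is displayed."""
    return frame / fps

contact = frame_to_seconds(19)      # sound effect is cued here
delayed = frame_to_seconds(19 + 5)  # landing held 5 more frames
print(f"cue at {contact:.3f}s, landing slips to {delayed:.3f}s "
      f"({(delayed - contact) * 1000:.0f} ms out of sync)")
# → cue at 0.792s, landing slips to 1.000s (208 ms out of sync)
```

A fifth of a second is well past the threshold where an audience notices a footstep or impact landing off its sound, which is why the mix team insists on locking those frames early.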
I call this phenomenon "animatic-itis": wishing you could improve the timing, but having your hands tied by the locked-down animatic.
I’ve been guilty of this myself, by the way. The airbag-bouncing shots in Roving Mars have less-than-ideal motion because we had to lock down the bounce timing for sound effects work a week or two before we finished tweaking the animation.