A physically-based facial muscle model for animation
He speaks: “Algorithm.” And you can just about read his lips.
The movie was created using muscle-driven physics-based animation. Other techniques might produce images that look just as real, but they are much less versatile.
The animation starts with a highly detailed model of the head and neck that was created by Eftychios Sifakis, a PhD student, and his colleagues in Ron Fedkiw’s lab at Stanford. They used data from the Visible Human project to create the model, and then morphed it to fit data obtained from both laser and MRI scans of a living subject.
To animate the model, the researchers estimated muscle activations, head position, and jaw articulation from motion-captured performances of a living person—Sifakis himself. For ten minutes in front of eight cameras, Sifakis spoke a full range of phonemes with 250 markers attached to his face, yielding fully three-dimensional motion data. From this, the researchers constructed a phoneme database describing how the muscles activate, and for how long, across a full range of phonemically appropriate facial movements. They then used these data to synthesize Sifakis’ face speaking words that were never captured on film.
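The core fitting step can be pictured as an inverse problem: find muscle activations whose simulated marker positions best match the captured ones. The sketch below is a heavily simplified illustration, not the authors' method—it assumes a linearized activation-to-displacement model (matrix `B` and rest pose `rest` are invented placeholders), whereas the actual work drives a full nonlinear finite-element face simulation. Activations are kept in [0, 1] by projection, since muscles cannot contract negatively.

```python
import numpy as np

# Hypothetical linearized model: near a rest pose, marker positions respond
# approximately linearly to muscle activations. All quantities here are
# synthetic stand-ins; the paper uses a physics-based simulation instead.
rng = np.random.default_rng(0)
n_markers, n_muscles = 250, 32                    # counts are illustrative
B = rng.normal(size=(3 * n_markers, n_muscles))   # activation -> displacement basis
rest = rng.normal(size=3 * n_markers)             # rest-pose marker coordinates

def estimate_activations(captured, steps=500, lr=1e-3):
    """Projected gradient descent for min ||rest + B a - captured||^2, 0 <= a <= 1."""
    a = np.zeros(n_muscles)
    for _ in range(steps):
        grad = B.T @ (rest + B @ a - captured)    # gradient of the squared residual
        a = np.clip(a - lr * grad, 0.0, 1.0)      # project back to physical range
    return a

# Synthesize a "captured" frame from known activations, then recover them.
true_a = rng.uniform(0, 1, n_muscles)
captured = rest + B @ true_a
est = estimate_activations(captured)
```

Run per frame of the capture, such a solve turns a stream of marker positions into a stream of activation vectors, which is what makes the data reusable for speech the cameras never saw.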
The work was published in the Proceedings of the Eurographics/ACM SIGGRAPH Symposium on Computer Animation in 2006. It could prove valuable not only for the entertainment industry but also for predicting the effect that facial surgery will have on expression.