With data from Mova, Unreal Engine 3 is capable of rendering lifelike facial movements in real time.
Point-based motion capture has become the industry-standard procedure for capturing the movements of the human body and converting the data for use in 3D animation. Reflective markers are placed at many points on the body; as the person moves around, the markers' positions are recorded, and the data can then be transferred to an animation. Both movies and games use this technology to give characters lifelike movements and actions.
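To make the idea concrete, here is a minimal sketch of how point-based capture data is commonly organized: each frame records a 3D position for every named marker, and animation tools read these per-marker trajectories to drive a character rig. The marker names and coordinate values below are purely illustrative, not from any real capture system.

```python
# Each frame maps marker names to (x, y, z) positions in meters.
frames = [
    {"head": (0.0, 1.70, 0.00), "left_wrist": (-0.40, 1.00, 0.10)},
    {"head": (0.0, 1.71, 0.02), "left_wrist": (-0.38, 1.05, 0.12)},
]

def trajectory(frames, marker):
    """Collect one marker's positions across all captured frames."""
    return [frame[marker] for frame in frames]

# The head marker's path over the two recorded frames:
print(trajectory(frames, "head"))
```

Real capture sessions record hundreds of markers at high frame rates, but the principle is the same: the animation only knows about the sparse points, which is why dense surfaces like faces come out rough.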
However, when it comes to realistic facial work, even setups that use hundreds of reflective dots leave developers with rough, blocky data that requires a lot of post-processing.
Enter motion capture company Mova. Mova’s Contour Reality Capture System uses multiple cameras to create 100,000-polygon facial models that are accurate to within one tenth of a millimeter.
Mova founder Steve Perlman, also the man behind Apple’s QuickTime and Microsoft’s WebTV, said:
“This pushes Unreal Engine 3 to its very limit … it’s about as photo-real as you can get in real time.”
Check out the video of Unreal Engine 3 running in real time on dual NVIDIA 8800 GTXs in SLI.
Perlman says the company has been working privately with developers for some time to adapt the system for video game use.
“People have never had this kind of data available before in a game context … their heads are spinning,” he said. “What you’re seeing right there is the result of having time to wrap our heads around this thing and see how we’re going to use it, and yes, we can in fact get a face that looks almost photo-real — you know, not quite, but almost photo-real — running in a game engine today.”
“You can see the difference then between what’s achievable in cinema and what’s achievable right now in video games. … But next generation game machines, they’ll be able to essentially show in real time what we can do currently in non-real-time using renderers. … Next generation, you’re going to have interactive sequences where people think there’s a live person in the game.”
The Unreal Engine’s abilities don’t stop there: the Contour system can also create even more detailed animations if real-time processing is not necessary. Below is another video of how reality capture data can look when pre-rendered.
Perlman says that the cost of a Contour motion-capture session isn’t much higher than a traditional marker-based capture session — somewhere in the region of a few thousand to a few hundred thousand dollars, depending on the length and complexity of the shoot.
The real savings come in post-production, Perlman explains:
“Unlike marker-based capture, which has a big manual clean-up process before you see results, with Contour it’s purely computational. …
“We’ve talked to people and one of the reasons when they announce delays for complex games is because they’re fighting to try and make the faces look good. With Contour, you send the guy in, he does a shoot, and we send you a face that looks nearly perfect. It’s no longer one of the risk issues for your schedule.”
The Contour system generates so much data, Perlman says, that the full value of the rendering won’t be apparent until hardware speeds improve.
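A quick back-of-envelope calculation suggests why that much data strains current hardware. The assumptions below are mine, not Mova's: a 100,000-polygon triangle mesh with roughly one vertex per two triangles, three 32-bit float coordinates per vertex, captured at 30 frames per second.

```python
# Rough estimate of raw per-frame and per-second mesh data volume
# for a Contour-style capture, under the assumed parameters above.

POLYGONS = 100_000
VERTICES = POLYGONS // 2      # rough vertex/triangle ratio for a closed mesh
BYTES_PER_VERTEX = 3 * 4      # x, y, z stored as 32-bit floats
FPS = 30

bytes_per_frame = VERTICES * BYTES_PER_VERTEX
bytes_per_second = bytes_per_frame * FPS

print(bytes_per_frame)    # 600,000 bytes, about 0.6 MB per frame
print(bytes_per_second)   # 18,000,000 bytes, about 18 MB per second
```

Even under these conservative assumptions, a minute of capture runs to roughly a gigabyte of raw geometry, which helps explain Perlman's point about archiving the data for future hardware.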
“With markers, you kind of get the resolution of what those markers are and that’s it. …
“When a next-generation game system comes out, or they decide they want to do something for a feature film, you can’t really use the data. With Contour, it’s actually capturing the data at much higher resolutions than any system in the world, even for feature films, can currently use. What we do is we store that data away … and when a next generation video game machine comes out and they want the data at higher resolution, they can.”
Perlman wouldn’t reveal which companies are currently using this technology, but said he expects the first games with Contour captures could come out in 2008, depending on developer schedules. He hopes the system will be in wide use by 2009.