Epic Games, the company that makes Unreal Engine, recently released a substantial update to its MetaHuman character creation tool, which for the first time allows developers to import scans of real people for use in real-time applications. The improvements offer a glimpse of a future where anyone can easily bring a realistic digital version of themselves into VR and the metaverse in general.
Epic’s MetaHuman tool is designed to make it easy for developers to create a wide variety of high-quality 3D character models for use in real-time applications. The tool works like an advanced version of a ‘character customizer’ that you would find in a modern video game, but with much more control and reliability.
In the initial release, developers could only build their characters from a selection of preset faces, then use the tool’s controls to customize the character’s appearance to their liking. Of course, many experimented with creating their own likeness, or that of recognizable celebrities. While character creation with MetaHuman is fast – compared to manually creating a similar model from scratch – achieving a specific person’s likeness remains a challenge.
But now the latest release includes a new ‘Mesh to MetaHuman’ feature that allows developers to import facial scans of real people (or 3D sculptures created in other software) and then have the system automatically generate a MetaHuman face from the scan, including full rigging for animation.
However, there are still some limitations. First, hair, skin textures, and other details are not generated automatically; at this point, the Mesh to MetaHuman feature focuses mainly on matching the overall topology of the head and rigging it for realistic animation. Developers still need to provide skin textures and do some extra work to match hair, facial hair, and eyes to the person they want to replicate.
The MetaHuman tool is still in early access and intended for developers of Unreal Engine. And while we’re not quite at the stage where everyone can just take a few pictures of their head and create a realistic digital version of themselves, it’s pretty clear that we’re moving in that direction.
However, if the goal is to create a fully believable avatar of ourselves for use in VR and the metaverse in general, there are still challenges to be solved.
Just generating a model that looks like you isn’t quite enough. You also need the model to move like you.
Each person has their own unique facial expressions and mannerisms that are easily recognizable to those who know them well. Even if a face model has been rigged for animation, unless it has been rigged in a way that is specific to your expressions and can draw from real examples of them, a realistic avatar will never look like you when it is in motion.
For people who don’t know you, that’s not so important because they have no baseline of your expressions to compare against. But it would matter in your close relationships, where even small changes in someone’s usual facial expressions and mannerisms can signal a range of states, such as being distracted, tired, or even drunk.
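The rigging discussed above is commonly implemented with blendshapes: each expression is stored as a set of per-vertex offsets from a neutral face, and animation blends those offsets with time-varying weights. The sketch below is a minimal, hypothetical illustration of that general technique (all names and values are invented for the example; this is not Epic’s or Meta’s implementation) — it hints at why person-specific expression data matters, since the stored offsets are what determine how "you" the motion looks.

```python
import numpy as np

# Toy 3-vertex "face" as 2D points; real rigs use thousands of 3D vertices.
neutral = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])

# Per-vertex deltas for two hypothetical expressions. In a personalized rig,
# these deltas would be derived from scans of *your* face making each expression.
blendshapes = {
    "smile": np.array([[0.0, 0.1], [0.0, 0.1], [0.0, 0.0]]),
    "brow_raise": np.array([[0.0, 0.0], [0.0, 0.0], [0.0, 0.2]]),
}

def animate(weights: dict) -> np.ndarray:
    """Return the posed face: neutral mesh plus weighted expression deltas."""
    face = neutral.copy()
    for name, w in weights.items():
        face += w * blendshapes[name]
    return face

# A half-strength smile lifts the mouth-corner vertices by half the stored delta.
posed = animate({"smile": 0.5})
```

Driving the weights live (from a headset’s face-tracking cameras, for example) is what systems like the one described next aim to do.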
In an effort to address this particular challenge, Meta (not to be confused with Epic’s MetaHuman tool) has been working on its own system, called Codec Avatars, which aims to animate a realistic model of your face with fully believable animations, unique to you, in real time.
Perhaps in the future we will see a fusion of systems like MetaHuman and Codec Avatars: one to easily create a lifelike digital avatar, and another to animate that avatar in a way that is uniquely and believably you.