A new patent filed by Meta showcases technology that could enable more realistic lip-syncing and facial animation in video games and VR applications. NPCs and player avatars with more lifelike facial movements would open the door to more engaging and immersive games and experiences, particularly in VR.
Even as video game graphics have advanced over the decades, allowing the creation of nearly photorealistic characters in recent years, portraying realistic speech and facial expressions for those characters has remained a challenge for developers. Gamers' brains are accustomed to seeing and reading the faces of other people, so even the best facial animation on a video game character can easily cross into uncanny valley territory, making characters look “off” or even downright disturbing. Meta’s patent could allow for the generation of realistic lip-syncing and facial expressions on the fly, saving time for developers and increasing immersion for players.
The patent details a variety of methods for translating a user’s speech into realistic lip-syncing and facial animation, giving developers flexibility in how they choose to use the system. One method outlined in the patent involves a dataset of audio and video recordings of multiple people reading 50 phonetically balanced sentences while the system tracks their facial expressions. By capturing how individuals’ faces and mouths move through each sentence, the system can pull from these movements and blend them together to realistically animate a character.
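To make the general idea concrete, here is a minimal Python sketch of phoneme-to-viseme blending, the family of technique a captured dataset like the one described would typically feed into. Everything in it (the `PHONEME_VISEMES` table, the blendshape names, the cross-fade window) is illustrative and assumed for this example, not taken from Meta's patent.

```python
# A minimal sketch of lip-sync via viseme blending, assuming a
# pre-built lookup of per-phoneme mouth poses ("visemes") averaged
# from recordings of many speakers. All names here are hypothetical.

from dataclasses import dataclass

# Each viseme is a set of blendshape weights a capture pipeline
# might derive from the tracked recordings.
PHONEME_VISEMES: dict[str, dict[str, float]] = {
    "AA":  {"jaw_open": 0.9, "lip_pucker": 0.1},  # as in "father"
    "OW":  {"jaw_open": 0.5, "lip_pucker": 0.8},  # as in "go"
    "M":   {"jaw_open": 0.0, "lip_pucker": 0.3},  # lips closed
    "sil": {"jaw_open": 0.0, "lip_pucker": 0.0},  # silence / rest pose
}

@dataclass
class TimedPhoneme:
    phoneme: str   # phoneme label from a speech recognizer
    start: float   # seconds
    end: float     # seconds

def pose_at(t: float, track: list[TimedPhoneme]) -> dict[str, float]:
    """Blend neighboring visemes so the mouth moves smoothly
    instead of snapping from one shape to the next."""
    for i, cur in enumerate(track):
        if cur.start <= t < cur.end:
            cur_pose = PHONEME_VISEMES[cur.phoneme]
            # Cross-fade into the next phoneme over the final 20%
            # of the current one (an arbitrary choice for the sketch).
            fade_start = cur.end - 0.2 * (cur.end - cur.start)
            if t >= fade_start and i + 1 < len(track):
                nxt_pose = PHONEME_VISEMES[track[i + 1].phoneme]
                alpha = (t - fade_start) / (cur.end - fade_start)
                return {k: (1 - alpha) * cur_pose[k] + alpha * nxt_pose[k]
                        for k in cur_pose}
            return dict(cur_pose)
    return dict(PHONEME_VISEMES["sil"])

# Example: animate the word "mow" (M -> OW) over half a second.
track = [TimedPhoneme("M", 0.0, 0.2), TimedPhoneme("OW", 0.2, 0.5)]
for t in (0.0, 0.18, 0.35):
    print(t, pose_at(t, track))
```

The point of the sketch is the blending step: because the output at any instant is a weighted mix of captured poses rather than a hard switch between them, the resulting animation avoids the abrupt mouth movements that tend to read as uncanny.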