Google researchers have found a way to create video versions of humans generated from just a single still image. This enables things like generating a video of someone speaking from input text, or changing a person's mouth movements to match an audio track in a different language to the one originally spoken. It also feels like a slippery slope into identity theft and misinformation, but what's AI without a hint of frightening consequences?
The tech itself is rather interesting: it's called VLOGGER by the Google researchers who published the paper. In it, the authors (Enric Corona et al.) offer up various examples of how the AI takes a single input image of a human (in this case, I believe, mostly AI-generated humans) and, given an audio file, produces both facial and bodily movements to match.
That's just one of a few potential use cases for the tech. Another is editing video, specifically a video subject's facial expressions. In one example, the researchers show various versions of the same clip: one with a presenter speaking to camera, another with the presenter's mouth closed in an eerie fashion, another with their eyes closed. My favourite is the video of the presenter with their eyes artificially held open by the AI, unblinking. Huge serial killer vibes. Thanks, AI.
The most useful feature, in my opinion, is the ability to swap a video's audio track for a dubbed foreign-language version and have the AI lip-sync the person's facial movements to the new audio.
It works through the use of two stages: "1) a stochastic human-to-3d-motion diffusion model, and 2) a novel diffusion based architecture that augments text-to-image models with both temporal and spatial controls. This approach enables the generation of high quality videos of variable length, that are easily controllable through high-level representations of human faces and bodies," the GitHub page says.
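To make that two-stage description a little more concrete, here's a toy sketch of the data flow: stage one turns per-frame audio features into motion parameters, and stage two renders video frames from a single reference image conditioned on that motion. Every function name, shape, and operation here is my own assumption for illustration; the real VLOGGER stages are diffusion models, not the simple stand-ins below.

```python
import numpy as np

# Hypothetical sketch of a VLOGGER-style two-stage pipeline. Names and
# shapes are illustrative assumptions, not the paper's actual API.

def audio_to_motion(audio_features: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Stage 1 stand-in: a stochastic model mapping per-frame audio features
    to body/face motion parameters. VLOGGER uses a human-to-3D-motion
    diffusion model; here we just add noise to a linear projection."""
    n_frames, n_audio = audio_features.shape
    n_motion = 6  # toy number of pose/expression parameters
    projection = np.ones((n_audio, n_motion)) / n_audio
    return audio_features @ projection + 0.01 * rng.standard_normal((n_frames, n_motion))

def render_frames(reference_image: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Stage 2 stand-in: generate one frame per motion step from a single
    still image. VLOGGER augments a text-to-image diffusion model with
    temporal and spatial controls; here we just modulate the image."""
    brightness = 1.0 + 0.1 * np.tanh(motion.mean(axis=1))  # (n_frames,)
    # Broadcast the single reference image across all frames.
    return reference_image[None] * brightness[:, None, None]

rng = np.random.default_rng(0)
audio = rng.standard_normal((24, 13))   # 24 frames of 13-dim audio features
image = rng.uniform(size=(64, 64))      # the single input still image
video = render_frames(image, audio_to_motion(audio, rng))
print(video.shape)  # one generated frame per audio frame
```

The key point the sketch captures is the quoted "variable length" claim: the output video simply has as many frames as the driving audio, while the identity comes from one still image.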
Read more on pcgamer.com