The potential for deepfakes to scam users during video calls is rising. But one company is pointing out that the AI-powered technology has an easy-to-spot flaw: It struggles to render fake faces at sideways angles.
The findings come from Metaphysic, an AI content-generating company that recently examined some of the limitations of real-time deepfakes. Through a series of tests, the company’s report shows the deepfake technology can faithfully render a celebrity’s face over someone else's face during a video call—but only when they're facing forward.
The fakery immediately collapses once the user turns their face 90 degrees to the side. The technology will also run into trouble if the user places a hand over their face.
Metaphysic released the report weeks after the FBI warned that fraudsters have been exploiting deepfake technology to impersonate job candidates during interviews. The scammers have been doing so while applying for remote jobs that could’ve given them access to financial and confidential corporate data.
Metaphysic offers an easy way for job interviewers to spot a real-time deepfake during a video call. The company ran the demo by using DeepFaceLab, the free software behind many popular deepfake videos circulating on YouTube. The software also has a real-time version called DeepFaceLive, which can swap a celebrity’s face over your own.
Although the technology can pull off the deepfakery with impressive results, the software wasn’t designed to run the real-time face-swapping at acute angles. For example, the facial-mapping processes will accidentally generate an additional eye or eyebrow for faces that appear in profile.
Read more on pcmag.com