Deepfakes have emerged as a major talking point this year as a malicious side-effect of artificial intelligence (AI). Bad actors have exploited the current boom in AI editing tools to create fake images of people and institutions. Multiple reports have emerged of criminals creating fake nudes of victims and threatening to post the photos online unless they were paid. Now, a group of researchers at the Massachusetts Institute of Technology (MIT) has come up with a tool that can help combat this problem.
According to a report by MIT Technology Review, the researchers have created a tool called PhotoGuard that alters images to protect them from being manipulated by AI systems. Hadi Salman, a contributor to the research and a PhD researcher at the institute, said: "Right now, anyone can take our image, modify it however they want, put us in very bad-looking situations, and blackmail us… [PhotoGuard is] an attempt to solve the problem of our images being manipulated maliciously by these models."
Traditional protections aren't sufficient for identifying AI-generated images because they're often applied like a stamp on an image and can easily be edited out.
The new protection is added as an invisible layer on top of the image. It cannot be removed even if the image is cropped, edited, or run through filters. While the layer does not interfere with how the image looks, it stops bad actors when they try to alter the picture to create deepfakes or other manipulated versions.
It should be noted that while special watermarking techniques also exist, this approach is different because it alters the pixels themselves to safeguard the image. Whereas watermarking lets users detect alterations after the fact, PhotoGuard is designed to stop the manipulation from working in the first place.
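To illustrate the general idea in simplified form, the protection can be thought of as a small adversarial perturbation: tiny, bounded changes to pixels computed so that an AI editing model's internal representation of the photo becomes useless. The sketch below is not MIT's released code; the toy encoder, parameter values, and function names are hypothetical stand-ins, and real systems target the encoders of actual diffusion models.

```python
# Minimal sketch of an "encoder attack" style immunization (illustrative only).
# ToyEncoder stands in for the image encoder of a generative editing model.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in for the image encoder of an AI editing model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def immunize(image, encoder, eps=8 / 255, steps=40, step_size=1 / 255):
    """Return a copy of `image` whose encoding is pushed toward the latent of a
    blank gray image, while every pixel stays within +/- eps of the original."""
    target = encoder(torch.full_like(image, 0.5)).detach()   # "useless" target latent
    delta = torch.zeros_like(image, requires_grad=True)      # the invisible layer
    for _ in range(steps):
        loss = ((encoder(image + delta) - target) ** 2).mean()
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()            # move latent toward target
            delta.clamp_(-eps, eps)                           # keep the change imperceptible
            delta.add_((image + delta).clamp(0, 1) - image - delta)  # keep pixels valid
        delta.grad.zero_()
    return (image + delta).detach()

if __name__ == "__main__":
    encoder = ToyEncoder().eval()
    photo = torch.rand(1, 3, 64, 64)            # placeholder for a real photo tensor
    protected = immunize(photo, encoder)
    print((protected - photo).abs().max())      # max per-pixel change stays <= eps
```

In this sketch the eps budget is what keeps the perturbation invisible to people, while the optimization is what makes the protected photo resistant to the editing model, which is the trade-off the researchers describe.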