Some artificial intelligence programs are marking images from the war in Israel and Palestine as fake, even though they are real photographs depicting real horrors.
A popular free AI image detector labeled a photo of the burned body of an infant killed in Hamas’ recent attack on Israel as AI-generated, even though the image is likely real, 404Media reports.
The image was examined by Hany Farid, a professor at UC Berkeley and one of the world’s leading experts on digitally manipulated images. He says that the image, which was first tweeted by Israel’s official Twitter account, doesn’t show any signs it was created by AI.
The claim that the image was fake appears to have originated with Ben Shapiro, a conservative Jewish commentator who tweeted the accusation Thursday morning. Later in the day, Jackson Hinkle also shared the image on the platform, along with a screenshot from the AI image detection tool “AI or Not” indicating the image was fake.
A context note since added to the tweet points out that AI or Not is unreliable at detecting AI-generated images and returns different results for the same image.
Farid notes that one of the easiest ways to detect an AI-generated image is to examine its lines: current AI generators struggle to render highly structured shapes and straight edges. Checking that shadows are consistent with the scene’s lighting is another quick test, since AI often gets shadowing wrong.
The image in question passes both tests: the shadows on the table are consistent with an overhead light source, and the leg of the table is straight.
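To make the straight-line heuristic concrete, here is a toy sketch (not any tool Farid or the detectors actually use): fit a least-squares line to a set of sampled edge points and measure how far the points stray from it. A real photo of a table leg should yield near-zero deviation, while a subtly wavy AI-rendered edge would not. All point data and names below are invented for illustration.

```python
import math

def line_straightness(points):
    """Return the maximum perpendicular deviation of points from their best-fit line."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    # Least-squares slope (assumes the edge is not perfectly vertical).
    num = sum((x - mean_x) * (y - mean_y) for x, y in points)
    den = sum((x - mean_x) ** 2 for x, _ in points)
    slope = num / den
    intercept = mean_y - slope * mean_x
    # Perpendicular distance from each point to the fitted line y = slope*x + intercept.
    norm = math.hypot(slope, 1.0)
    return max(abs(slope * x - y + intercept) / norm for x, y in points)

# Synthetic edge samples: a perfectly straight edge vs. a slightly wavy one.
straight_edge = [(x, 2 * x + 1) for x in range(10)]
wavy_edge = [(x, 2 * x + 1 + (0.5 if x % 2 else -0.5)) for x in range(10)]

print(line_straightness(straight_edge))  # effectively zero
print(line_straightness(wavy_edge))      # clearly nonzero
```

Real forensic analysis is far more involved, but the idea is the same: structured geometry gives a measurable signal that is easy for cameras and hard for generators.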
Farid warns that the tools currently available online aren’t a good way to determine whether an image is real or fake.
Read more on pcmag.com