Artists under siege by artificial intelligence (AI) that studies their work, then replicates their styles, have teamed with university researchers to stymie such copycat activity.
US illustrator Paloma McClain went into defense mode after learning that several AI models had been "trained" using her art, with no credit or compensation sent her way.
"It bothered me," McClain told AFP.
"I believe truly meaningful technological advancement is done ethically and elevates all people instead of functioning at the expense of others."
The artist turned to free software called Glaze created by researchers at the University of Chicago.
Glaze essentially outthinks AI models when it comes to how they train, tweaking pixels in ways indiscernible by human viewers but which make a digitized piece of art appear dramatically different to AI.
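The core idea behind such "cloaking" can be illustrated with a toy sketch. The snippet below is not Glaze's actual algorithm (which computes perturbations adversarially against an AI model's feature extractor); it simply shows how a tightly bounded, per-pixel change can be applied to an image while staying below the threshold a human viewer would notice. The function name `cloak` and the `epsilon` bound are illustrative assumptions.

```python
import numpy as np

def cloak(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Toy illustration of pixel-level cloaking.

    Adds a small, bounded perturbation to each pixel. `epsilon` caps the
    per-pixel shift (on a 0-255 scale), so the change is imperceptible to
    human viewers. Real tools like Glaze compute targeted adversarial
    perturbations instead of random noise.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    # Clip so values stay in the valid 0-255 pixel range.
    cloaked = np.clip(image.astype(np.float64) + noise, 0, 255)
    return cloaked.astype(np.uint8)

# Stand-in "artwork": a flat mid-gray 64x64 RGB image.
art = np.full((64, 64, 3), 128, dtype=np.uint8)
protected = cloak(art)
# No pixel moves by more than epsilon, far below human perception.
assert np.max(np.abs(protected.astype(int) - art.astype(int))) <= 2
```

In the real system, the perturbation is optimized so that an AI model's internal representation of the cloaked image shifts toward a different artistic style, even though the pixels barely change.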
"We're basically providing technical tools to help protect human creators against invasive and abusive AI models," said professor of computer science Ben Zhao of the Glaze team.
Created in just four months, Glaze spun off technology used to disrupt facial recognition systems.
"We were working at super-fast speed because we knew the problem was serious," Zhao said of rushing to defend artists from software imitators.
"A lot of people were in pain."
Generative AI giants have agreements to use data for training in some cases, but the majority of digital images, audio, and text used to shape the way supersmart software thinks has been scraped from the internet without explicit consent.
Since its release in March of 2023, Glaze has been downloaded more than 1.6 million times, according to Zhao.
Zhao's team is working on a Glaze enhancement called Nightshade that notches up the defense by actively confusing AI models trained on altered images, for example leading one to interpret a picture of a dog as a cat.