AI-generated voices can be good, or they can be bad—as Witcher voice actor Doug Cockle said, it really all comes down to the intent of the user. How to effectively address the potential for misuse is a big question, but the US Federal Communications Commission has now taken a step to crack down on at least one avenue of misuse by outlawing AI-generated voices in robocalls.
Robocalls are automated phone calls that play a pre-recorded message, commonly used for telemarketing or political campaigns. They're an aggravation at the best of times, but the rise of AI makes them especially dangerous because instead of a random voice, a person might hear Clint Eastwood, Gregory Peck, or US president Joe Biden. That last one actually happened in January, when a fake-Biden-voiced robocall went out to more than 20,000 people, urging Democrats not to vote.
Scamming people over the telephone is obviously illegal, but the FCC's new rule makes the use of AI-generated voices in robocalls itself illegal. The agency said the new regulation expands "the legal avenues through which state law enforcement agencies can hold these perpetrators accountable under the law."
"Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We're putting the fraudsters behind these robocalls on notice," FCC Chairwoman Jessica Rosenworcel said in the announcement of the new rule. "State Attorneys General will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation."
The declaratory ruling was made under the Telephone Consumer Protection Act, which allows for fines of $500 to $1,500 per violation—which is to say, per call. As you might imagine, that can add up in a hell of a hurry: In August 2023, the FCC levied a $300 million fine on the perpetrators of an auto warranty robocall scam that made more than five billion robocalls to more than 500
Read more on pcgamer.com