It's understandable to feel rattled by the persuasive powers of artificial intelligence. At least one study has found that people were more likely to believe disinformation generated by AI than disinformation written by humans. The scientists in that investigation concluded that people preferred the way AI systems used condensed, structured text. But new research shows how the technology can be used for good.
A recent study conducted by researchers at the Massachusetts Institute of Technology has validated something many AI watchers long suspected: The technology is remarkably persuasive when reinforced with facts. The scientists invited more than 2,000 people who believed in different conspiracy theories to summarize their positions to a chatbot — powered by OpenAI's latest publicly available language model — and briefly debate them with the bot. On average, participants subsequently described themselves as 20% less confident in the conspiracy theory; their views remained softened even two months later.
Companies like Alphabet Inc.'s Google and Meta Platforms Inc. might one day use persuasive chatbots for advertising, given their heavy reliance on ads for revenue, but people in the ad industry tell me that prospect is far off at best and unlikely to happen at all. For now, a clearer and better use case is tackling conspiracy theories, and the MIT researchers reckon there's a reason generative AI systems do it so well: they excel at combating the so-called Gish gallop, a rhetorical technique that tries to overwhelm an opponent in a debate with a flood of points and arguments, however thin on evidence. The term is named after the American creationist Duane Gish, who had a rapid-fire debating style in which he'd frequently change topics; people who believe in conspiracy theories tend to do the same.
“If you're a human, it's hard to debate with a conspiracy theorist because they say, ‘What about this random thing and this random thing?'” says David Rand, one of the MIT study's authors. “A lot of times, the