America is undergoing a crisis of social trust, whether it be in government, the media, the Federal Reserve or simply people with opposing political views. America is also witnessing a revolution in artificial intelligence, as AI transforms everything from Google's business model to childhood.
These two developments got me thinking: Could the AI revolution somehow be harnessed to bring about a resurgence in social trust, rather than its further collapse?
It's a tall order, I admit. The most recent technology to have taken over the internet, social media, has often been linked with a rise in misinformation and thus a decline in social trust. And much of the recent commentary on AI points to this same risk. Large language models (LLMs) can be used to create vast quantities of propaganda, possibly swamping the internet.
That is a real risk, but it does not sound so different from the status quo. Tyrants and bad-faith actors already hire humans to flood the internet with bad content.
The more hopeful news is less about content than about curation. The major current LLMs, such as those from Anthropic and OpenAI, are trained on internet data, yet if asked questions about Russia or China, they offer relatively objective answers. Users can also steer answers toward a more factual or academic register simply by asking for it.
The point is not that LLMs won't be used to create propaganda — they will — but that they offer users another option to filter it out. With LLMs, users can get the degree of objectivity they desire, at least once they learn how the models work.
Still, this is a safe prediction: Within a year or two, there will be a variety of LLMs, some of them open source, and people will be able to use them to generate the kinds of answers they want.