Artificial intelligence lab OpenAI published a blog post Monday seeking to address fears that its technology will meddle with elections, as more than a third of the globe prepares to head to the polls this year. The use of AI to interfere with election integrity has been a concern since the Microsoft-backed company released two products: ChatGPT, which can mimic human writing convincingly, and DALL-E, whose technology can be used to create "deepfakes," or realistic-looking images that are fabricated. Those worried include OpenAI's own CEO Sam Altman, who testified before Congress in May that he was "nervous" about generative AI's ability to compromise election integrity through "one-on-one interactive disinformation."
The San Francisco-based company said that in the United States, which will hold presidential elections this year, it is working with the National Association of Secretaries of State, an organization that focuses on promoting effective democratic processes such as elections.
ChatGPT will direct users to CanIVote.org when asked certain election-related questions, it added.
The company also said it is working to make it more obvious when images are generated with DALL-E, and plans to add a "cr" icon to such images to indicate they are AI-generated, following a protocol created by the Coalition for Content Provenance and Authenticity.
It is also working on ways to identify DALL-E-generated content even after images have been modified.
In its blog post, OpenAI emphasized that its policies prohibit its technology from being used in ways it has identified as potentially abusive, such as creating chatbots that pretend to be real people or discouraging voting.
The company said it also prohibits DALL-E from creating images of real people, including political candidates.
The company faces challenges policing what is actually happening on its platform.
When Reuters last year tried to create images of Donald Trump and Joe Biden, the request was