A nonprofit group is calling on the Federal Trade Commission to bar OpenAI from offering GPT-4 until the company can install safeguards around the AI program.
The Center for AI and Digital Policy today filed a complaint with the FTC that accuses OpenAI of not doing enough to prevent ChatGPT from being abused or causing mayhem.
“The Federal Trade Commission has declared that the use of AI should be ‘transparent, explainable, fair, and empirically sound while fostering accountability.’ OpenAI’s product GPT-4 satisfies none of these requirements. It is time for the FTC to act,” the group says.
The nonprofit cites the FTC’s own past statements, which demand that companies responsibly commercialize and market their AI technologies, as grounds for an investigation into OpenAI. Just last week, the commission published a blog post warning companies against ignoring the risks of AI chatbot technologies.
“Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors,” the FTC wrote. “Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.”
The nonprofit argues that OpenAI has been offering GPT-4, the company’s latest large language model, without proper safeguards in place, thus warranting the FTC’s scrutiny. The group points to OpenAI’s own documentation for GPT-4, which notes that the technology poses several safety risks, including helping bad actors pump out disinformation, assisting in the creation of malware, and the AI itself proliferating misinformation.
Read more on pcmag.com