The FTC is turning its attention to ChatGPT. Outlets such as the Washington Post report that the US regulator is looking into OpenAI's popular bot and whether it has breached consumer protection laws.
Earlier in the week, the FTC sent a 20-page document to OpenAI calling on the company to address risks related to its AI models. The agency has repeatedly stressed that AI isn't exempt from consumer protection laws, and its list of demands underlines how seriously it takes the matter.
For video games, this investigation comes as ChatGPT (and AI in general) is increasingly being used by game developers. While some have used it to help with coding or writing tasks, others worry it will be used to reduce labor costs and lead to more employee cuts.
According to the document, the FTC's chief concern is whether ChatGPT has caused reputational harm to consumers. Among its demands is a detailed description of any and all complaints OpenAI has received about ChatGPT making "false, misleading, disparaging or harmful" statements about people.
The agency also called for research and test data showing how well consumers understand "the accuracy or reliability of outputs" made by OpenAI tools, along with information on how the company advertises its products and complaint records about those aforementioned false statements.
At the outset, ChatGPT was touted as a fun little tool for creating fake conversations with real-life people, or even making its own games. But there have always been concerns about what it could mean for plagiarism going forward, and about its effect on the entire creative field.
Hollywood writers and actors, for example, are both presently striking in part because of the effects they fear AI will have on their respective fields. The worry is that studios will use an
Read more on gamedeveloper.com