OpenAI is adding a new privacy control to ChatGPT that can address concerns about the AI program tapping users’ conversation histories to improve itself.
On Tuesday, the company added an option to turn off chat history in ChatGPT, which also prevents OpenAI from using your queries to improve the program. “Conversations that are started when chat history is disabled won’t be used to train and improve our models, and won’t appear in the history sidebar,” OpenAI said in a blog post.
The new control is rolling out now through the settings tab, found in the three-dot menu next to the user account. An option called “Data controls” should appear, letting you toggle off the “chat history & training” storage mode. (Previously, users had to fill out a Google form to opt out of the data collection.)
There is a catch, though: OpenAI will still store your conversations even when chat history is turned off. But the company says it will retain them only “for 30 days and review them only when needed to monitor for abuse, before permanently deleting.”
OpenAI is introducing the privacy control a month after a bug caused ChatGPT to briefly leak conversation histories from random users. There have also been worries about people submitting proprietary data to ChatGPT, which the program could then use as training material.
A fiction writer learned this the hard way after she encouraged users on TikTok to turn to ChatGPT for feedback on their writing. She later backtracked and warned that the program could promote plagiarism by regurgitating content from other writers. “Which means if you give it your intellectual property, it could then spit it out…”