Your recent ChatGPT queries may have accidentally leaked to other users.
ChatGPT's sidebar shows a history of the past prompts you entered into the AI program. But on Monday, some users spotted conversation histories that were not their own.
The discrepancy was especially stark for a user on Reddit who spotted conversations conducted in Chinese. Others saw the issue and immediately feared their accounts had been hacked.
OpenAI didn’t immediately respond to a request for comment. So it’s unclear if ChatGPT was truly surfacing chat histories from other profiles, or if the program was simply making up the past prompts. But OpenAI reported an outage for ChatGPT at around 10 a.m. PST after users noticed the problem with chat histories.
The apparent bug is stirring concerns about user privacy. If you entered any personal information in a past chat, the error could surface that same information to a stranger.
Fortunately, users who’ve encountered the problem say ChatGPT will only surface the title of the past chat—not the full prompt with all the details. If you click on a past chat, the program will fail to load it.
It also looks like in many cases ChatGPT will only surface generic terms when showing chat histories possibly taken from other users, omitting any personally identifiable information. Thus, it should be difficult to determine which person a chat history came from.
That said, the apparent bug underscores a risk of using ChatGPT: like any public web service, it too can suffer a potential data breach. The privacy policy for OpenAI also notes it can share “aggregated
Read more on pcmag.com