AI shock to the system! Researchers fool ChatGPT to reveal personal data using a simple prompt
A team of artificial intelligence (AI) researchers has successfully exploited a vulnerability in OpenAI's generative AI model ChatGPT, as per a study published by them. Researchers used a simple prompt to trick the chatbot into revealing personal information of individuals including name, email address, phone number, and more. Surprisingly, the study claimed that the team was able to repeat the exploit multiple times to extract 10,000 unique verbatim memorized training examples. The extracted personal information is believed to be embedded deep into the system's training data, which it should not be able to divulge, and is a major privacy concern.