Multiple employees of Samsung's Korea-based semiconductor business plugged lines of confidential code into ChatGPT, effectively leaking corporate secrets that could be included in the chatbot's future responses to other people around the world.
One employee copied buggy source code from a semiconductor database into the chatbot and asked it to identify a fix, according to The Economist Korea. Another employee entered code for a different piece of equipment and requested "code optimization" from ChatGPT. After a third employee asked the AI model to summarize meeting notes, Samsung executives stepped in, limiting each employee's ChatGPT prompts to 1,024 bytes.
Just three weeks earlier, Samsung had lifted its ban on employees using ChatGPT, a ban originally imposed over this very concern. After the recent incidents, the company is considering reinstating it, as well as taking disciplinary action against the employees, The Economist Korea says.
"If a similar accident occurs even after emergency information protection measures are taken, access to ChatGPT may be blocked on the company network," reads an internal memo. "As soon as content is entered into ChatGPT, data is transmitted to and stored on an external server, making it impossible for the company to retrieve it."
The OpenAI user guide warns users against this behavior: "We are not able to delete specific prompts from your history. Please don't share any sensitive information in your conversations." It says the system uses all questions and text submitted to it as training data.
The use of chatbots to find and fix buggy code has become pervasive in software engineering. When a user asks a coding question, the chatbot attempts to identify the problem and generate a corrected version of the code.
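To illustrate the kind of exchange involved, here is a hypothetical example (not Samsung's actual code): a developer pastes a small buggy Python function into a chatbot and receives back the sort of corrected version such tools typically suggest.

```python
# Hypothetical buggy snippet a developer might paste into a chatbot.
def average(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)  # Raises ZeroDivisionError on an empty list


# The kind of fix a chatbot typically proposes: guard the edge case
# and use the built-in sum() for clarity.
def average_fixed(values):
    if not values:  # Handle the empty-list case explicitly
        return 0.0
    return sum(values) / len(values)


print(average_fixed([1, 2, 3]))  # 2.0
print(average_fixed([]))         # 0.0
```

Even a routine exchange like this one transmits the pasted code to the chatbot provider's servers, which is precisely the exposure Samsung's memo describes.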