Samsung has officially banned its employees from using generative AI tools like ChatGPT over "growing concerns about security risks presented by generative AI." Meanwhile, in Italy, the national ban on ChatGPT was lifted after OpenAI complied with the privacy regulator's demands for more disclosure and privacy tools.
In a memo viewed by Bloomberg News, Samsung told staff that using AI tools like Google Bard and Bing, which store information on external servers, poses a security risk. The ban applies to its internal networks and company-owned devices such as PCs, phones, and tablets. Last month, the company limited use of the AI chatbot after some staff inadvertently leaked confidential information multiple times.
The memo reads, “While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”
Employees still using AI tools have been warned not to submit any company information involving any Samsung intellectual properties or risk “disciplinary action up to and including termination of employment.”
Samsung is developing internal AI tools for translating and summarizing documents. The main issue with third-party chatbots is that conversations can be used to train the underlying large language models. So when you ask one to summarize notes from a secret product meeting, those details end up stored on a server you cannot access.
“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” Samsung wrote.
ChatGPT recently added an “incognito” mode that prevents your chats from being used to train OpenAI's models.
Read more on pcgamer.com