Apple is joining the growing list of companies banning ChatGPT and other third-party generative AI tools for company use.
According to a document viewed by The Wall Street Journal, Apple is worried that employees using these tools could inadvertently leak confidential information. And considering what happened with Samsung in April, when employees accidentally uploaded private source code to ChatGPT not once but three times, it's no surprise that Apple doesn't want to take that risk.
One piece of AI software Apple specifically named in the ban is GitHub Copilot, an AI tool that can generate and write simple code. It's a big time-saver for programmers. The issue is that data fed to these tools is stored on external servers, where it can be used to train multiple AI models, including those created and operated by other companies.
For instance, say you're working on code meant for a super-secret pair of Apple AR glasses; that's information you probably don't want to accidentally feed to an AI run by rivals like Microsoft or Google. Protecting proprietary information is why companies like Amazon, Verizon, JPMorgan Chase, and Northrop Grumman have banned ChatGPT for the time being.
OpenAI recently updated ChatGPT's privacy options to let users delete and disable their chat history, preventing that data from being used to train its large language model. However, according to its Data Controls FAQ, all conversations are kept for 30 days before being permanently deleted so OpenAI can "monitor for abuse," but nothing else; the company says it doesn't use that window to squeeze out any bonus LLM training.
In an earnings call, Apple CEO Tim Cook said it's "very important to be deliberate and thoughtful" in how the company approaches AI.