Generative AI has swept over the digital landscape in a wave of unprecedented innovation. Consumers across the globe are using applications such as OpenAI's ChatGPT and DALL-E, Google's Bard, and Midjourney for content creation, ideation, problem solving, or just simple fun. According to Nerdy Nav, the largest share of ChatGPT users is from the United States (15.22%), followed by India (6.32%).
Like any new technology, though, generative AI raises data-privacy concerns: it processes personal data and can generate information that is potentially sensitive. Interactions with AI tools can also inadvertently collect personal data, such as a user's name, address, and contact details.
For instance, Google's Bard has drawn flak over the possibility that it was trained on users' Gmail data. And according to Reuters, Google parent Alphabet has been cautioning its employees not to enter confidential information into chatbots, including its own Bard.
OpenAI's ChatGPT has also made limited headway in the European Union (EU), a regulation champion with strict data rules, which accounts for only 3.98% of its global user base. That alone should alert us that generative AI must be handled with caution. In fact, the first known instance of a chatbot being blocked by government order came in April, when ChatGPT was banned in Italy over privacy concerns.
According to AI/ML developers, a lack of data is the prime obstacle to developing further AI models. Like the proverbial snake that swallows its own tail, generative AI is a great source of data for AI models, even as data remains the most significant ingredient in building generative AI models.
In the end, AI is, after all, a technology, and as with any technology, its impact depends on how responsibly it is used.
Read more on tech.hindustantimes.com