While generative AI is rapidly advancing, it also raises concerns about its potential for malicious use. Generative artificial intelligence (AI) models may be susceptible to bias, as they learn patterns and generate outputs based on the data they are trained on. If the training data is biased or incomplete, the model's output can likewise be inaccurate or biased. Moreover, because AI language models can generate human-like text and can be trained to mimic the writing style of individuals, there are serious concerns about their potential misuse for spreading fake news.
Another interesting question is whether Generative AI platforms, as intermediaries, can claim safe harbour for the content published on them. It is important to note that, unlike search engines, which only provide links to webpages and content available on the internet, Generative AI processes available data and generates an independent output. Hence, it may be difficult for all Generative AI platforms to be categorized as intermediaries under the law. Further, since varied parties are involved in the ChatGPT / Generative AI (GAI) ecosystem (third-party data owners, GAI companies, platform providers, and users), there could be multiple IP claimants, and the ownership rights in the output generated by such systems are therefore highly contentious.
Moreover, there is limited guidance on, or obligation regarding, the accountability of a GAI system and the way its output is arrived at, which could lead to issues of bias, accountability, and explainability. Additionally, the protection of user data and user rights is complex: it may not be possible to seek user consent when data is scraped from the internet. In such scenarios, the implementation of data protection requirements becomes challenging.