When the U.S. Supreme Court decides in the coming months whether to weaken a powerful shield protecting internet companies, the ruling could also have implications for rapidly developing technologies such as the artificial intelligence chatbot ChatGPT.
The justices are due to rule by the end of June on whether Alphabet Inc's YouTube can be sued over its video recommendations to users. The case tests whether a U.S. law that protects technology platforms from legal responsibility for content posted online by their users also applies when companies use algorithms to target users with recommendations.
What the court decides is relevant beyond social media platforms. Its ruling could influence the emerging debate over whether companies that develop generative AI chatbots, such as ChatGPT from OpenAI, a company in which Microsoft Corp is a major investor, or Bard from Alphabet's Google, should be protected from legal claims like defamation or privacy violations, according to technology and legal experts.
That is because the algorithms that power generative AI tools like ChatGPT and its successor GPT-4 operate in a somewhat similar way to those that suggest videos to YouTube users, the experts added.
"The debate is really about whether the organization of information available online through recommendation engines is so significant to shaping the content as to become liable," said Cameron Kerry, a visiting fellow at the Brookings Institution think tank in Washington and an expert on AI. "You have the same kinds of issues with respect to a chatbot."
Representatives for OpenAI and Google did not respond to requests for comment.
During arguments in February, Supreme Court justices expressed uncertainty over whether to weaken that shield.