The Group of Seven nations are preparing to ask tech companies to agree to a set of rules to mitigate the risks of artificial intelligence systems as part of a proposal aimed at uniting the divided approaches in Europe and the US.
The 11 draft guidelines, which will be voluntary, include external testing of AI products before they're deployed, public reports on security measures, and controls to protect intellectual property, according to a copy seen by Bloomberg News. They may be agreed to next week in Japan, though the document is still being discussed, and both its contents and the timing of an announcement could change.
Still, the countries — Canada, France, Germany, Italy, Japan, the UK and US — are divided about whether the companies' progress should be monitored, people familiar with the matter said. While the US is opposed to any oversight, the European Union is pushing for a mechanism that would check on compliance and publicly name companies that had run afoul of the code, said the people, who asked not to be identified because the negotiations are private.
After OpenAI's ChatGPT service set off a race among tech companies to develop their own artificial intelligence systems and applications, governments around the world began grappling with how to enforce guardrails on the disruptive technology while still taking advantage of the benefits.
The EU will likely be the first Western government to establish mandatory rules for AI developers. Its proposed AI Act is in final negotiations with the aim of reaching a deal by the end of the year.
The US has been pushing for the other G-7 countries to adopt voluntary commitments it agreed to with companies, including OpenAI, Microsoft Corp. and Alphabet Inc.'s Google in July.