The European Union's AI regulation has some predicting a spate of Brussels copycats. Close, but not quite.
"It is the AI moment."
So went the declaration from International Telecommunication Union Secretary-General Doreen Bogdan-Martin at the conclusion of a UN summit in Geneva on 7 July 2023.
At a historic UN Security Council meeting 11 days later, Secretary-General António Guterres agreed. So did nations and regulators.
A desire has emerged from powerful quarters to protect citizens from the potential harms of AI — harms that are already known (discrimination, privacy violations, copyright theft) and those that are not. Yet.
Most nations have so far approached AI by letting individual sectors regulate it themselves — aircraft design and flight safety, for example. The infamous Boeing 737 MAX — grounded for over 18 months after two crashes within five months killed 346 people — is one egregious example of such sectoral regulation failing.
Other fields that have proactively regulated AI include medicine (presiding over robotic surgery and scan analysis), automated vehicles (the yet-to-materialise Tesla robotaxis and 'Full Self Drive' [sic]) and social media, where networks are policed to protect against harms like disinformation.
Some countries, such as the US, Japan and the UK, see no need for regulation beyond this adaptive sectoral approach, supplemented by potential international agreements addressing more speculative risks, as discussed in the so-called G7 Hiroshima Process.
Others want to go further.
More can be done. Generic laws could regulate AI across broader society. China has already published its law governing AI as part of its social control measures, which includes internet filtering through its