European Union policymakers and lawmakers clinched a deal on Friday on the world's first comprehensive set of rules regulating the use of artificial intelligence (AI) in tools such as ChatGPT and in biometric surveillance.
They will thrash out details in the coming weeks that could alter the final legislation, which is expected to go into force early next year and apply in 2026.
Until then, companies are encouraged to sign up to a voluntary AI Pact to implement key obligations of the rules.
Here are the key points that have been agreed:
So-called high-risk AI systems - those deemed to have significant potential to harm health, safety, fundamental rights, the environment, democracy, elections and the rule of law - will have to comply with a set of requirements and obligations, such as undergoing a fundamental rights impact assessment, in order to gain access to the EU market.
AI systems considered to pose limited risks would be subject to very light transparency obligations, such as disclosure labels declaring that content is AI-generated, allowing users to decide how to use it.
The use of real-time remote biometric identification systems in public spaces by law enforcement will only be allowed to help identify victims of kidnapping, human trafficking, sexual exploitation, and to prevent a specific and present terrorist threat.
They will also be permitted in efforts to track down people suspected of terrorism offences, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation and environmental crime.
General-purpose AI (GPAI) systems and foundation models will be subject to transparency requirements such as drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.
Read more on tech.hindustantimes.com