The European Union reached a preliminary deal that would limit how advanced AI models, such as the one behind ChatGPT, can operate, in what's seen as a key part of the world's first comprehensive artificial intelligence regulation.
All developers of general-purpose AI systems – powerful models that have a wide range of possible uses – must meet basic transparency requirements, unless they're provided free and open-source, according to an EU document seen by Bloomberg.
Models deemed to pose a “systemic risk” would be subject to additional rules, according to the document. The EU would determine that risk based on the amount of computing power used to train the model. The threshold is set at models trained using more than 10 septillion (10^25) floating-point operations in total.
Currently, the only model that would automatically meet this threshold is OpenAI's GPT-4, according to experts. The EU's executive arm can designate others depending on the size of the data set, whether they have at least 10,000 registered business users in the EU, or the number of registered end-users, among other possible metrics.
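The threshold arithmetic can be made concrete. The sketch below (not from the article or the Act's text) uses the common 6 × parameters × training-tokens rule of thumb for estimating a model's total training compute; the model sizes shown are hypothetical examples, not figures for any named model.

```python
# Illustrative only: compare an estimated training-compute figure against the
# EU's reported systemic-risk threshold of 10 septillion (1e25) operations.

EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # total training FLOPs, per the reported deal

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the 6*N*D heuristic (an assumption,
    not part of the AI Act)."""
    return 6 * n_parameters * n_tokens

def exceeds_threshold(n_parameters: float, n_tokens: float) -> bool:
    """Would a model of this size trip the reported compute threshold?"""
    return estimated_training_flops(n_parameters, n_tokens) > EU_SYSTEMIC_RISK_THRESHOLD

# Hypothetical frontier model: 1.8e12 parameters, 1.3e13 training tokens
# 6 * 1.8e12 * 1.3e13 ≈ 1.4e26 FLOPs, above the 1e25 threshold
print(exceeds_threshold(1.8e12, 1.3e13))

# Hypothetical mid-size model: 7e9 parameters, 2e12 tokens ≈ 8.4e22 FLOPs
print(exceeds_threshold(7e9, 2e12))
```

The point of the heuristic is that the threshold is a property of the training run, not of how fast the model runs once deployed.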
These highly capable models would be expected to sign on to a code of conduct while the European Commission works out more harmonized, longer-lasting controls. Those that don't sign will have to prove to the commission that they're complying with the AI Act. The exemption for open-source models doesn't apply to those deemed to pose a systemic risk.
These models would also face further obligations under the deal.
The tentative deal still needs to be approved by the European Parliament and the EU's 27 member states. France and Germany have previously voiced concerns that applying too much regulation to general-purpose AI models could kill off emerging European competitors.