The European Parliament today agreed on the shape of its proposed rules on AI, ahead of those rules being formally negotiated with EU member states. The new rules aim to make it easier to spot when content has been AI generated, including deep fake images, and would completely outlaw AI's use in biometric surveillance, emotion recognition, and predictive policing.
The new rules would mean AI tools such as OpenAI's ChatGPT would have to make it clear that content is AI generated, and would have some responsibility for ensuring users know when an image is a deep fake or the real deal. That seems a mighty task, as once the image is generated it's tough to limit how a user shares it, but that might be something these AI companies have to figure out in the near future.
If these new rules were to pass through European Parliament as is, AI models would need to release "detailed summaries" of copyrighted data used in training to the public. For OpenAI, specifically, this would force it to unveil the training data for its massive GPT-3 and GPT-4 models used today, which is currently not available to peruse. There are some big datasets used for training AI models that already make this data available, such as LAION-5B.
There would also be AI uses that are entirely prohibited, specifically those that could encroach on EU citizens' privacy rights.
These rules are yet to actually be enshrined into law. Ahead of that, member states get to jump in with any propositions of their own, and that process will begin later today. Expect the finalised rules for AI to look similar to these proposed ones, however. The EU seems dead set on making sure it has the jump on AI and its potential uses—insofar as any government can, anyway.
Read more on pcgamer.com