UK and EU governments are throwing themselves on the proverbial tram track that is AI ethical standards. The European Commission (EC) has already drafted some laws to regulate the use of AI, but reports suggest it'll take up to a year to actually get them in place.
Right now, we're caught in the crossfire of the AI badlands. The law is seemingly being pushed aside while new AI applications spring up everywhere, wholly unregulated.
According to Reuters (via AI News), two lawmakers involved with the EU's proceedings said the debate is tied up over whether facial recognition should be banned, and over who should have the right to preside over the rules and keep the AI in check.
It's a similar situation in the US, where there is still no federal regulation of artificial intelligence, though there is reportedly some US AI regulation "on the horizon." It will apparently take a different form, however, with the detailed framework the EC has proposed exchanged for an agency-by-agency approach.
The previous draft from the European Commission established some classifications for AI, depending on the level of risk that each system might pose to us as a species. These range from 'limited risk systems' such as chatbots and spam filters, right up to those of 'unacceptable risk'—i.e. anything exploitative, manipulative, or that might "conduct real-time biometric authentication in public spaces for law enforcement."
That all sounds very Orwellian, but when we've got DeepMind training AIs to control nuclear fusion, you'd think facial recognition would be the least of our worries.
'High risk' AI systems will be required to undergo heavy vetting, and be kept on some tight reins in order to operate within the law. Regulations could include anything from
Read more on pcgamer.com