Generative AI may well be in vogue right now, but when it comes to artificial intelligence systems that are far more capable than humans, the verdict from American voters is pretty clear. A survey showed that 63% of respondents believe government regulations should be put in place to actively prevent superintelligent AI from ever being achieved, not merely restrict it in some way.
The survey, carried out by YouGov for the Artificial Intelligence Policy Institute (via Vox), took place last September. While it sampled only a small number of US voters (just 1,118 in total), the demographics covered were broad enough to be fairly representative of the wider voting population.
One of the specific questions asked in the survey focused on "whether regulation should have the goal of delaying superintelligence." Specifically, it's talking about artificial general intelligence (AGI), something that the likes of OpenAI and Google are actively working to achieve. In the case of the former, its mission expressly states this, with the goal of "ensur[ing] that artificial general intelligence benefits all of humanity," and it's a view shared by those working in the field, even if one of them is a co-founder of OpenAI on his way out of the door...
Regardless of how honourable OpenAI's intentions are, or maybe were, it's a message that's currently lost on US voters. Of those surveyed, 63% agreed with the statement that regulation should aim to actively prevent AI superintelligence, 21% said they didn't know, and 16% disagreed altogether.
The survey's overall findings suggest that voters are significantly more worried about keeping "dangerous [AI] models out of the hands of bad actors" than about ensuring such models benefit us all. Research into new, more powerful AI models should be regulated, according to 67% of the surveyed voters, and those models should be restricted in what they're capable of. Almost 70% of respondents felt that AI should be regulated like a "dangerous