To prevent artificial intelligence from destroying society, OpenAI is asking the public for realistic ideas on how the company’s programs could cause a catastrophe.
The request is a contest of sorts. This "Preparedness Challenge" will award the top 10 entries $25,000 in API credits for access to the company's various programs.
"Imagine we gave you unrestricted access to OpenAI's Whisper (transcription), Voice (text-to-speech), GPT-4V, and DALL·E 3 models, and you were a malicious actor. Consider the most unique, while still being probable, potentially catastrophic misuse of the model," OpenAI says.
In other words, what's the worst you could realistically do if you had access to OpenAI's most advanced programs? It's obvious the models could pump out misinformation or be exploited to perpetrate scams. However, OpenAI is looking for more "novel ideas" that the company may have overlooked.
Interested participants will need to outline their idea, including the steps required to pull it off, and a way to measure “the true feasibility and potential severity of the misuse scenario” described. In addition, the company is asking for ways to mitigate the potential threat. The challenge runs until Dec. 31.
OpenAI introduced the challenge alongside a new "Preparedness" team, which the company is launching to prevent future AI programs from becoming a danger to humanity. The team will focus on building a framework to monitor, evaluate, and even predict the potential dangers of "frontier AI" systems.
The team will also look at how future AI systems could pose catastrophic risks to several areas, including cybersecurity, “chemical, biological, radiological, and nuclear” threats, along with how artificial intelligence could be used toward
Read more on pcmag.com