A number of AI tools are now publicly available, from Google Bard and Bing AI to the one that got this ball rolling: OpenAI's ChatGPT. But what can these artificial intelligence models actually do? We'll soon find out, as hackers put the biggest names in AI to the test at Defcon 31.
The hacker convention is calling on attendees at this year's gathering to "find bugs in large language models built by Anthropic, Google, Hugging Face, Nvidia, OpenAI, and Stability."
Defcon 31 is scheduled for Aug. 10-13 in Las Vegas. The AI effort is being organized by AI Village, in partnership with Humane Intelligence, SeedAI, and the AI Vulnerability Database. The White House Office of Science, Technology, and Policy is also involved, as is the National Science Foundation's Computer and Information Science and Engineering Directorate, and the Congressional AI Caucus.
"This is the first time anyone is attempting to have more than a few hundred experts assess these models," according to Defcon organizers. "The more people who know how to best work with these models, and their limitations, the better. This is also an opportunity for new communities to learn skills in AI by exploring its quirks and limitations."
AI Village organizers will provide laptops, access to each model, and a prize for the person who most thoroughly tests each one. "We will be providing a capture the flag (CTF) style point system to promote testing a wide range of harms," the organizers say. "The individual who gets the highest number of points wins a high-end Nvidia GPU."
Participants will include expert researchers as well as
Read more on pcmag.com