Kaleigha Hayes, a student at the University of Maryland Eastern Shore, is trying to trick an AI chatbot into revealing a credit card number, one that may be buried deep in the training data used to build the artificial intelligence model. “It's all about just getting it to say what it's not supposed to,” she tells me.
She is surrounded by a throng of people all trying to do the same thing. This weekend, more than 3,000 people sat at 150 laptops at the Caesars Forum convention center in Las Vegas, trying to get chatbots from leading AI companies to go rogue in a special contest backed by the White House and run with the companies' cooperation.
Since the arrival of ChatGPT and other bots, fears over the potential for abuse and unintended consequences have gripped the public consciousness. Even fierce advocates of the technology warn of its potential to divulge sensitive information, spread misinformation or provide blueprints for harmful acts, such as bomb-making. In this contest, participants are encouraged to attempt the kinds of nefarious ploys bad actors might try in the real world.
The findings will form the basis of several reports on AI vulnerabilities to be published next year. The challenge's organizers say it sets a precedent for transparency around AI. But in this highly controlled environment, it is clearly only scratching the surface.
What took place at the annual Def Con hacking conference provides something of a model for testing OpenAI's ChatGPT and other sophisticated chatbots. But with such enthusiastic backing from the companies themselves, I wonder how rigorous the supposed “hacks” actually are, or whether, as critics have charged in the past, the leading firms are merely paying lip service.