You can now earn money from ChatGPT developer OpenAI by finding bugs in the popular chatbot program. The company today announced a bug bounty program that offers cash rewards in exchange for reporting security vulnerabilities in OpenAI’s systems.
“Our rewards range from $200 for low-severity findings up to $20,000 for exceptional discoveries,” OpenAI says. The program is being run through Bugcrowd, a bug bounty platform.
However, OpenAI won’t accept jailbreaks for ChatGPT or text prompts intended to trick the AI program into violating its own rules. Since ChatGPT first emerged, users have found ways to jailbreak the chatbot to post swear words, write about banned political topics, or even create malware.
The bug bounty program also won’t accept reports of ChatGPT generating incorrect facts. “Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed,” OpenAI says. “Addressing these issues often involves substantial research and a broader approach.” (Users can report model safety issues using a separate form.)
Instead, OpenAI’s bug bounty program focuses on flaws pertaining to user privacy and cybersecurity on the company's web domains and APIs. Last month, OpenAI apologized for a bug that briefly caused ChatGPT to leak payment details and chat histories for some users.
The bug bounty program promises to help OpenAI uncover similar weaknesses in how ChatGPT processes user data, which has begun to include third-party access via the ChatGPT API and plugin store.
The program’s scope also permits users to uncover bugs involving
Read more on pcmag.com