Tech companies aren’t the only ones scrambling to create AI-powered chatbots. Cybercriminals are getting in on the action, too.
Earlier this month, a hacker was found to be developing WormGPT, a ChatGPT-like bot that can help buyers create phishing messages and malware. Now security researchers have spotted another evil chatbot, this one aptly named FraudGPT.
The developer of FraudGPT began publicizing the malicious chat program over the weekend in a hacking forum, according to cloud security provider Netenrich. “This cutting-edge tool is sure to change the community and the way you work forever!” the developer claims.
The bot offers features similar to WormGPT’s. A chat box receives your prompts, and the bot responds accordingly. In a video demo, FraudGPT quickly pumps out an effective SMS phishing message impersonating a bank. The bot can also be prompted to supply intel on the best websites to target for credit card fraud. In addition, it can provide non-Verified by Visa bank identification numbers, which flag cards that skip the extra Verified by Visa check and are thus easier to exploit with stolen card data.
FraudGPT’s developer also appears to traffic in hacked data, such as stolen credit card numbers, and offers guides to committing fraud, so it’s possible all this information could be fused into the chatbot service.
The developer of FraudGPT didn’t respond to a request for comment, making it unclear what large language model the program uses. But the bot isn’t cheap: the developer is charging $200 per month for access, more than WormGPT, which goes for €60 per month.
It’s unclear whether either chatbot is actually useful as a hacking tool. But Netenrich warns the technology could lower the bar for would-be cybercriminals to pull off convincing scams.