To no one’s surprise, criminals are tapping open-source generative AI programs for all kinds of heinous acts, including developing malware and phishing attacks, according to the FBI.
The agency on Friday held a call with journalists to discuss how generative AI programs—which are all the rage in the tech industry—are also fueling cybercrime. The malicious activity ranges from scammers using AI programs to refine and mass-produce their schemes to terrorists consulting the technology for help building more potent chemical attacks.
“We expect over time as adoption and democratization of AI models continues, these trends will increase,” said a senior FBI official.
The FBI wouldn’t identify the specific AI models criminals are using. But the official noted hackers are gravitating toward free, customizable open-source models, along with private hacker-developed AI programs, which are circulating in the cybercriminal underworld for a fee.
The official added that seasoned cybercriminals are exploiting the AI technology to develop new malware attacks and better delivery methods for them, including using AI-generated websites as phishing pages that can secretly deliver malicious computer code. The same technology is helping hackers develop polymorphic malware, which can evade antivirus software.
Last month, the FBI also warned that scammers are using AI image generators to create sexually themed deepfakes of victims in an effort to extort money from them. The exact scale of these various AI-powered schemes remains unclear. But during the call, the FBI official added that the bulk of the cases the agency is seeing involve criminal actors using AI models to bolster their traditional schemes. This includes attempts to defraud loved ones or the elderly.