The generative artificial intelligence revolution has opened new avenues for growth and the potential for major advancements, but it has also brought new threats to security and privacy. Recently, we have seen cybercrimes emerge in which criminals and bad actors leverage generative AI for malicious purposes. Just last month, a man in Thiruvananthapuram was scammed out of Rs. 40,000 after a scammer used a deepfake to impersonate someone the victim knew. These crimes, which now make use of advanced AI malware, have risen so sharply that even the Federal Bureau of Investigation (FBI) in the USA has had to issue a warning to raise public awareness of them.
According to a report by PCMag, the US agency held a meeting with journalists to discuss how generative AI malware is fueling cybercrimes. “We expect over time as adoption and democratization of AI models continues, these trends will increase,” one official was quoted as saying. While the agency did not name any particular platform, it did highlight that criminals were leaning towards free, customizable, and open-source platforms. Private, hacker-developed AI programs have also become popular in this niche, as per the report.
There are different ways in which hackers and scammers are using AI technology to carry out their malicious plans. One of the most popular methods involves using AI to create deepfakes of people the victim might know in order to fool them. The software can generate fake videos of a person as well as alter a voice to resemble theirs. Such video and audio calls are then used to deceive unsuspecting victims, as in the case mentioned above.
But things get worse. Another method involves using AI to create malware, including phishing tools.
Read more on tech.hindustantimes.com