Time and again, experts have warned humanity about the dangers of artificial intelligence, stating that it can be used for nefarious purposes by bad actors if adequate checks are not introduced. Now, OpenAI, one of the pioneers in the field and the company behind the hugely popular AI chatbot ChatGPT, is voicing the same concerns. As spotted by Bleeping Computer, OpenAI has published a detailed report explaining how it stopped more than 20 operations that were leveraging ChatGPT to create malware, spread misinformation about Indian elections on social media, and more.
OpenAI, in its official report, states that the company has worked to identify patterns and trends that demonstrate how generative AI comes into play when bad actors use it to create malware and facilitate other nefarious goals.
One of the bad actors OpenAI identified was SweetSpecter, a group reportedly of Chinese origin. OpenAI claims it was targeted directly, receiving phishing emails with malicious ZIP attachments disguised as support requests.
OpenAI says that in May, while India's general elections were underway, an Israel-based commercial company dubbed Zero Zeno tried to generate social media comments about the elections in India. The company states that it disrupted the operation within 24 hours of it beginning. Later, OpenAI also stopped a similar operation ahead of the European Parliamentary elections.
Another major group that misused ChatGPT was Cyb3rAv3ngers, an Iranian outfit that targeted industrial systems and infrastructure in Western countries. The group used ChatGPT to produce default credentials, develop Bash and Python scripts, and more. It also used the chatbot to help conceal its activities and exploit vulnerabilities to gain access to user passwords on macOS.
OpenAI also detailed another nefarious
Read more on tech.hindustantimes.com