ChatGPT developer OpenAI is starting to explore using artificial intelligence to automate cybersecurity work.
The company is opening a $1 million grant program to fund projects that use AI to bolster cybersecurity. This could include automatically patching vulnerabilities, detecting and thwarting social engineering attacks, and even nudging consumers toward best security practices.
In fact, OpenAI is asking for proposals covering 16 different areas, including the idea of using AI to create “honeypots and deception technology,” which could misdirect and lure hackers into a trap. But for now, the company is refraining from awarding grants for “offensive-security projects” — something AI technologies could also be adept at producing.
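As a rough illustration of the kind of defensive tooling such grants might fund, here is a minimal sketch of using a chat model to triage an inbound email for social-engineering red flags. This is not part of OpenAI's program; the model name, prompt, and helper function are assumptions made purely for the example, and it relies on the official openai Python package (v1+) with an API key set in the environment.

```python
# Minimal sketch (illustrative only): asking a chat model to flag
# possible phishing / social-engineering cues in an email.
# Assumes the official openai Python package (v1+) and an OPENAI_API_KEY
# environment variable; the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def triage_email(email_text: str) -> str:
    """Label an email as 'benign' or 'suspicious' with one short reason."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute any available model
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security assistant. Classify the email as "
                    "'benign' or 'suspicious' and give one short reason, "
                    "focusing on phishing and social-engineering cues."
                ),
            },
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    sample = (
        "Your account is locked. Click http://example-support.net "
        "now and enter your password."
    )
    print(triage_email(sample))
```

In practice, a production tool would add structured output, rate limiting, and human review of flagged messages; this sketch only shows the basic shape of the idea.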
“All projects should be intended to be licensed or distributed for maximal public benefit and sharing, and we will prioritize applications that have a clear plan for this,” the company wrote in the announcement.
The scope of the grant program suggests OpenAI sees potential in using ChatGPT-like technologies to automate large parts of cybersecurity work. The company says it's sponsoring the funding because artificial intelligence could help turn the tide in the ongoing fight against computer hacks.
The cybersecurity industry is also facing a scarcity of workers, so the technology could help address the talent shortage, even as AI sparks fears of taking human jobs in other areas. But we'll have to wait and see whether AI can effectively bolster cybersecurity, or whether it'll simply produce false positives that waste time and resources.
OpenAI plans to distribute the funds in increments of $10,000, which could include direct funding as well as API credits.