Microsoft is bringing the technology behind ChatGPT to the cybersecurity industry with a program designed to help IT professionals fend off attacks.
Security Copilot is a virtual assistant that helps IT staff analyze and respond to security threats facing their organization. “With Security Copilot, defenders can respond to security incidents within minutes instead of hours or days,” the company says.
The program is essentially an analysis tool built on OpenAI’s newest GPT-4 language model, which can summarize large bodies of text, write professional-grade responses, and even generate computer code.
Like ChatGPT, Security Copilot operates through a prompt bar. In a demo, Microsoft showed that you can ask it for a summary of a new vulnerability, submit a suspected malicious file for analysis, or pull up the latest security incidents that occurred inside an internal network.
In return, Security Copilot can fetch data from Microsoft’s other security products, including the company’s threat intelligence, to come up with the appropriate response.
In another example, the program was able to analyze the source of an attack, identifying which device was infected, through what domain, and which system processes were involved. An IT security analyst can also use the tool to scan emails and logins across a corporate network for patterns that match suspected threats.
The program’s other powerful capability is a “prompt book,” a collection of text inputs that can automate Security Copilot to handle a task. In the demo, Microsoft showed one such prompt causing Security Copilot to reverse-engineer a malicious script in seconds and generate a report.
Read more on pcmag.com