Microsoft Introduces GPT-4-Powered Cybersecurity Assistant It Calls The 'Security Copilot'

Microsoft Security Copilot

For years, the AI field was mostly quiet, and most of the buzz it created stayed contained within its own community.

But when OpenAI introduced ChatGPT as an AI chatbot tool, the internet was quickly captivated. This is because the AI can perform a wide range of tasks, including writing poetry, technical papers, novels, and essays.

Since then, tech companies have been moving fast to add generative AI capabilities to their respective products.

And this time, Microsoft has upped the ante.

After announcing an AI-powered Copilot assistant for its Office apps, Microsoft is shifting its focus towards generative AI-powered cybersecurity.

Called the 'Microsoft Security Copilot', it's essentially an assistant for cybersecurity professionals, designed to help them identify breaches and better understand the huge amounts of signals and data available to them daily.

It takes the form of a simple prompt box like any other chatbot, where users can ask things like "what are all the security incidents in my enterprise?" and have it summarize everything for them.

Powered by OpenAI's GPT-4, the tool also builds on Microsoft's own security-specific model, which digests some 65 trillion daily signals Microsoft collects through its threat intelligence gathering and security-specific skills.

Besides using its own threat intelligence database, the tool also taps into databases from the Cybersecurity and Infrastructure Security Agency and the National Institute of Standards and Technology's vulnerability database.

Then, there is also a prompt book feature, which is essentially a set of steps or automations that users can create and bundle into a single easy-to-use button or prompt. This way, those involved can initiate a whole list of automation steps with a single press of a button or prompt, and they can also share prompts with others to do things like reverse engineering a script, so they don't have to wait for someone on their team to perform that type of analysis.
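As an illustration only (Microsoft has not published the prompt book format, and every name below is invented), a prompt book can be thought of as a named, shareable bundle of prompt steps that runs as a single action:

```python
from dataclasses import dataclass, field

@dataclass
class PromptBook:
    """Hypothetical sketch of a 'prompt book': a reusable, shareable
    bundle of prompt steps that runs as one action."""
    name: str
    steps: list[str] = field(default_factory=list)

    def run(self, ask) -> list[str]:
        # Execute each stored prompt in order via the assistant's
        # query function and collect the responses.
        return [ask(step) for step in self.steps]

# Example: a shareable script-analysis prompt book.
book = PromptBook(
    name="Reverse-engineer a script",
    steps=[
        "Deobfuscate and summarize the attached script.",
        "List any indicators of compromise it contains.",
    ],
)

# Stand-in for the copilot; a real assistant would answer each prompt.
responses = book.run(lambda prompt: f"[analysis of: {prompt}]")
for r in responses:
    print(r)
```

The point of the sketch is that the bundle, not each individual prompt, becomes the unit a team shares and triggers.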

Users can even use Security Copilot to create a PowerPoint slide that outlines incidents and the attack vectors.

And since all prompts and responses are saved, investigators can conduct a full audit using the logs.

Microsoft's goal is for security professionals to use Security Copilot to help with incident investigations, quickly summarize events, and assist with reporting.

Security Copilot is designed as an "assistant." Being a "copilot" and not a pilot means that it's never meant to replace human security analysts.

Instead, it's meant to work with them.

It even includes a pinboard section for co-workers to collaborate and share information. This way, assigned colleagues can all work on the same threat analysis and investigations.

"This is like having individual workspaces for investigators and a shared notebook with the ability to promote things you’re working on," explained Chang Kawaguchi, an AI security architect at Microsoft.

But it's the way Security Copilot incorporates OpenAI's GPT-4 that makes the tool capable of understanding natural language inputs.

With GPT-4, security professionals can use their own words to ask for a summary of a particular vulnerability, or feed in files, URLs, or code snippets for analysis.

They can also request information, such as alert details from other security tools.

But as capable as the tool is, it isn't always right.

“We know sometimes these models get things wrong, so we’re offering the ability to make sure we have feedback,” explained Kawaguchi.

"It’s a little more complicated than that, because there are a lot of ways it could be wrong."

Users can respond by telling the AI what it got wrong, helping it better understand the case.

"I don’t think anyone can guarantee zero hallucinations, but what we are trying to do through things like exposing sources, providing feedback, and grounding this in the data from your own context is ensuring that it’s possible for folks to understand and validate the data they’re seeing," added Kawaguchi. "In some of these examples there’s no correct answer, so having a probabilistic answer is significantly better for the organization and the individual doing the investigation."

While Microsoft's Security Copilot is similar to the ChatGPT-powered Bing chatbot and built on the same underlying technology, Security Copilot is designed solely to tackle security issues.

This means it responds only to security-related queries.

"This is very intentionally not Bing," said Kawaguchi. "We don’t think of this as a chat experience. We really think of it as more of a notebook experience than a freeform chat or general purpose chatbot."