AI in Cybersecurity: How Generative AI Fits In
How far along is the integration of generative AI into cybersecurity solutions? A look at the initiatives of around ten vendors. What is generative AI in security products actually for? For most providers, the messaging covers the threat research, analysis, and response phases. Zscaler is no exception with its Security Autopilot feature, which it says it is testing internally. It does, however, also address some more specific angles.
On the one hand, documentation, with the promise of a chatbot-style interface for exploring product features. On the other, DLP, where generative AI is expected to extend analysis beyond text and images to audio and video. Checkmarx, for its part, highlights an “AI Guided Remediation” component that is meant to provide, within IDEs, explanations of poor IaC and API configurations.
Another component highlighted is the “AI Query Builder,” intended to automate the writing of SAST rules. Both are based on GPT-4. Veracode also applies generative AI to automation, but for generating patches, initially in Java and C#. The underlying model is likewise from the GPT family, and general availability on the command-line interface is announced for this month.
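To illustrate the kind of automation described here, below is a minimal sketch of prompting GPT-4 to draft a SAST rule from a plain-language description of a vulnerability class. It is an assumption-laden illustration, not Checkmarx’s or Veracode’s actual implementation: the prompt wording, the Semgrep-style output format, and the use of the OpenAI Python SDK are choices made only for this example.

```python
# Minimal sketch (not Checkmarx's or Veracode's actual code): ask GPT-4 to draft
# a Semgrep-style SAST rule from a plain-language description of a vulnerability.
# Assumes the OpenAI Python SDK v1+ and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

vulnerability = (
    "Python code that builds SQL statements by concatenating user input "
    "instead of using parameterized queries."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You write static analysis rules. Output only a valid "
                       "Semgrep YAML rule with an id, message, severity, "
                       "languages and patterns.",
        },
        {"role": "user", "content": f"Write a rule that detects: {vulnerability}"},
    ],
    temperature=0,
)

# A human reviewer would validate the generated rule before adding it to a ruleset.
print(response.choices[0].message.content)
```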
Orca Security has also chosen the GPT family (a Davinci model) to enrich its platform, with one angle in particular: improving the accuracy of remediation advice. The resulting capability is currently available as a demo. It uses the OpenAI API to generate recommendations from pre-filtered security alerts, delivered across several surfaces: CLI, console, and IaC templates. The plan is to couple a conversational interface to it, as sketched below.
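The flow described here — pre-filtered alerts in, remediation advice out — can be sketched roughly as follows. This is a hedged illustration only: the alert fields, the filtering rule, and the model are assumptions (the article cites a Davinci-family model; the chat endpoint is used here for simplicity), not Orca’s implementation.

```python
# Hypothetical sketch of the flow described above: pre-filter an alert, then ask
# an OpenAI model for remediation advice. Field names, the filtering rule, and the
# model choice are assumptions, not Orca's implementation.
from openai import OpenAI

client = OpenAI()

alert = {
    "id": "alert-0001",                    # illustrative alert, not real data
    "severity": "high",
    "title": "S3 bucket allows public read access",
    "resource": "arn:aws:s3:::example-bucket",
    "account_hint": "prod",                # example of a field to strip before sending
}

def prefilter(a: dict) -> dict:
    """Keep only low-sensitivity fields worth sending to the external API."""
    return {k: a[k] for k in ("severity", "title", "resource")}

if alert["severity"] in ("high", "critical"):      # only escalate noteworthy alerts
    prompt = (
        "Suggest step-by-step remediation for this cloud security alert:\n"
        f"{prefilter(alert)}"
    )
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    print(completion.choices[0].message.content)
```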
Charlotte AI, Purple AI, Security Copilot: Everyone Has Their Own “Detection And Response Assistant.”
On Google’s side, large language models are used, in particular, within the Assured Open Source Software service to improve package analysis. A cybersecurity-specific derivative of PaLM is also in the pipeline; it will power Mandiant’s solutions across the research-analysis-response triptych. Google’s cloud branch, for its part, offers generative AI services under the Vertex AI banner.
Cohesity is among the vendors that plan to use it to help secure backups. The same goes for Sysdig, which intends to use it to power its CNAPP platform. Generative AI is also on the roadmap at Palo Alto Networks, whose objective is to offer, this year, detection and prevention services based on an in-house LLM, which should also, more generally, improve the experience of using its products.
SentinelOne is in a limited beta phase with its Purple AI add-on for the Singularity platform. Under the hood, it mixes open source and proprietary models, we are told. The primary audience is SOCs, which it aims to support across their activities through a natural language interface backed by the vendor’s data lake.
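A natural language interface over a security data lake typically boils down to translating an analyst’s question into a query the lake understands. The sketch below is hypothetical and not SentinelOne’s implementation: the table schema, the SQL target, and the prompt are assumptions for illustration.

```python
# Hypothetical sketch of a natural-language interface over a security data lake.
# The schema and SQL dialect are assumptions; a generated query would be reviewed
# before being run against the actual data lake.
from openai import OpenAI

client = OpenAI()

SCHEMA = """
Table endpoint_events(
  ts TIMESTAMP, hostname TEXT, process_name TEXT,
  command_line TEXT, parent_process TEXT, severity TEXT
)
"""

def nl_to_query(question: str) -> str:
    """Translate an analyst's question into SQL against the (assumed) schema."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Translate questions into a single ANSI SQL query. "
                        f"Use only this schema:\n{SCHEMA}\nOutput SQL only."},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

print(nl_to_query("Which hosts ran encoded PowerShell commands in the last 24 hours?"))
```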
The emphasis is placed, among other things, on the ability to summarize threats. CrowdStrike, too, has found its flagship brand: Charlotte AI. Also in limited preview, this “generative AI-based cyber analyst” sits on top of the Falcon platform, with the same principle of a conversational interface at its functional core.
The development of Charlotte AI rests on a partnership with AWS and its Bedrock generative AI service. Security Copilot, which Microsoft made official at the end of March, is also in preview. It is a detection and response assistant based on a GPT model, but not only that: prompts are not sent directly to the LLM. They first pass through a specialized model, which refines them with signals from threat intelligence feeds and various Microsoft products. The whole thing is wrapped in a chat interface.
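The orchestration pattern described here — refining a prompt with external signals before it reaches the LLM — can be sketched generically. The in-memory lookup, field names, and prompt wording below are assumptions, not Microsoft’s actual pipeline.

```python
# Generic sketch of the grounding pattern described above: enrich the analyst's
# prompt with threat-intelligence context before calling the LLM. The in-memory
# "feed" and the prompt wording are assumptions, not Microsoft's actual pipeline.
from openai import OpenAI

client = OpenAI()

# Stand-in for signals a real orchestrator would pull from threat intel feeds
# and security products.
THREAT_INTEL = {
    "qakbot": "Qakbot: banking trojan, often delivered via malicious attachments.",
    "log4shell": "Log4Shell (CVE-2021-44228): RCE in Log4j via JNDI lookups.",
}

def ground_prompt(user_prompt: str) -> str:
    """Prepend any threat-intel entries whose keywords appear in the prompt."""
    hits = [v for k, v in THREAT_INTEL.items() if k in user_prompt.lower()]
    context = "\n".join(hits) if hits else "No matching intel."
    return (
        f"Context from threat intelligence:\n{context}\n\n"
        f"Analyst question:\n{user_prompt}"
    )

def ask(user_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a security analyst assistant."},
            {"role": "user", "content": ground_prompt(user_prompt)},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

print(ask("Summarize what we know about Qakbot and how to detect it."))
```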