Issue: Companies misunderstand, mistrust, and misevaluate AI hyperautomations.
Historically, Security Operations Centers (SOCs), incident response teams, red teams, and other cybersecurity functions have relied on a combination of manual analysis, deep subject matter expertise, and iterative hands-on processes to identify, analyze, and respond to threats. Even when supported by modern analytics tools, these processes are time-consuming, resource-intensive, costly, and limited in scalability. Today, this approach falls short in the face of constant technological change and evolving cyber threats.
Automation in Security
Unsurprisingly, business process automation (BPA), software that automates repetitive business tasks, and robotic process automation (RPA), a type of BPA that uses configurable scripts (bots) to automate tasks, have become staples for modern organizations. Security teams have adopted these capabilities as well, reengineering many security processes to deliver incremental time and cost savings. For example:
- Scripts that collate vulnerability scan details and patch/change requirements (a minimal sketch follows this list)
- Alert detection scripts that leverage new attack matrix details
- Security Information and Event Management (SIEM) solutions that automate event log collection, normalization, and correlation, and apply patterns to generate actionable alerts
- Security Orchestration, Automation and Response (SOAR) tools that help security teams automate and manage SOC operations
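As a minimal sketch of the first item above, the script below collates scanner findings into a per-host patch worklist. The file names, field names, and severity values are hypothetical; real scanners use their own export schemas.

```python
import csv
import json
from collections import defaultdict

# Hypothetical input: a JSON export from a vulnerability scanner.
# Field names are illustrative only; real tools use their own schemas.
with open("scan_results.json") as f:
    findings = json.load(f)

# Group findings by host so each asset owner receives one patch/change request.
worklist = defaultdict(list)
for finding in findings:
    if finding.get("severity", "").lower() in ("critical", "high"):
        worklist[finding.get("host", "unknown")].append({
            "cve": finding.get("cve", "N/A"),
            "title": finding.get("title", ""),
            "fix": finding.get("solution", "See vendor advisory"),
        })

# Write a flat CSV that can be attached to a change ticket.
with open("patch_worklist.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["host", "cve", "finding", "recommended_fix"])
    for host, items in sorted(worklist.items()):
        for item in items:
            writer.writerow([host, item["cve"], item["title"], item["fix"]])
```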
Process automations can be combined with AI techniques including machine learning (ML) and neural networks (a subset of ML) for enhanced outcomes. For example:
- Intelligent SOC workflow discovery and automation
- Alert prioritization, advanced threat detection, and actionable insights (a simple prioritization sketch follows this list)
- Controlled access and authorized use of resources
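As a hedged sketch of ML-based alert prioritization, the second item above: a conventional classifier (here scikit-learn's RandomForestClassifier, with hypothetical features and toy training data) scores alerts so analysts triage the riskiest ones first. This illustrates the general pattern, not any particular vendor's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical alerts: each row is [severity, asset_criticality,
# failed_logins_last_hour, is_known_bad_ip]; labels mark which alerts
# previously turned out to be true incidents.
X_train = np.array([
    [3, 5, 12, 1],
    [1, 2, 0, 0],
    [4, 4, 3, 1],
    [2, 1, 1, 0],
])
y_train = np.array([1, 0, 1, 0])  # 1 = confirmed incident, 0 = benign

# A plain random forest: transparent feature importances, cheap to run, no GPU.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score today's alert queue and surface the highest-risk alerts first.
new_alerts = np.array([
    [4, 5, 20, 1],
    [1, 1, 0, 0],
])
risk_scores = model.predict_proba(new_alerts)[:, 1]
for alert, score in sorted(zip(new_alerts.tolist(), risk_scores),
                           key=lambda pair: pair[1], reverse=True):
    print(f"alert features={alert} incident_probability={score:.2f}")
```

A model like this runs on commodity CPUs and exposes its feature importances, which is part of why such established techniques remain practical and explainable at scale.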
Hyperautomation
Hyperautomation goes a step further, automating complex processes across systems and leveraging AI to orchestrate workflows and support intelligent decisions. For example:
- AI identity and access management that automates onboarding, identity validation, rights management, and access controls to resources across disparate systems
- AI SOCs that hyperautomate operational aspects of a security program, including real-time threat detection, incident response, and retrospective querying of retained data as new threat intelligence arrives from operations or the community, creating a continuous threat awareness and response program (a simple sketch of such retrospective querying follows this list)
- AI copilots that enable automated code reviews, continuous AppSec testing, and other complex DevSecOps functions in the software development life cycle (SDLC), enhancing developer productivity and helping deliver resilient applications to market faster
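As a hedged sketch of the retrospective querying mentioned above (hypothetical table, schema, and indicator values), newly received indicators of compromise are replayed against retained connection logs; in practice the store would be a SIEM or data lake rather than a local SQLite database.

```python
import sqlite3

# Hypothetical retained-log store: a local SQLite table of historical
# connection logs standing in for a SIEM or data lake.
conn = sqlite3.connect("retained_logs.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS conn_logs (
        ts TEXT, src_host TEXT, dest_ip TEXT, bytes_out INTEGER
    )
""")

# New threat intelligence arrives (illustrative indicators of compromise).
new_iocs = {"203.0.113.55", "198.51.100.17"}

# Retrospective hunt: did any host talk to these IPs before we knew
# they were malicious?
placeholders = ",".join("?" for _ in new_iocs)
rows = conn.execute(
    f"SELECT ts, src_host, dest_ip, bytes_out FROM conn_logs "
    f"WHERE dest_ip IN ({placeholders})",
    tuple(new_iocs),
).fetchall()

for ts, src_host, dest_ip, bytes_out in rows:
    # In a hyperautomated SOC, each hit would open or enrich an incident
    # automatically; here we simply print the match.
    print(f"{ts}: {src_host} -> {dest_ip} ({bytes_out} bytes) matches new IOC")
```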
While AI hyperautomation has serious potential, many companies are not effectively capitalizing on the opportunity due to fear, uncertainty, and doubt (FUD) about its reliability and its total cost of ownership (TCO) relative to return on investment (ROI).
One source of fear is FOMO: the fear of missing out on GenAI. Rather than focusing on their actual needs and on the existing AI solutions already available to hyperautomate their work processes, which might deliver the efficient outcomes they hope to get from GenAI, many organizations get caught up in the hype around the latest and greatest AI innovations without stepping back to consider the best use of AI for their specific use case. For example, while many GenAI security innovations, including LLM-based tools, are innovative and promising, they can be prohibitively expensive at scale due to high GPU costs and other resource demands. This is particularly true for mid-market and middle-management buyers with limited budgets.
The truth is that ML, neural networks, and other proven, decades-old AI technologies are often much more practical for hyperautomation in security. They tend to deliver market-proven outcomes, benefit from economies of scale, come at a lower price point, and can be more trustworthy, transparent, and explainable.
Unfortunately, many people assume that the shortcomings of GenAI apply to AI more generally. While GenAI lacks widespread trust due to documented hallucinations, limited explainability, and concerns about security vulnerabilities, more established AI solutions are usually highly observable and built around a defined set of tasks. Unfounded mistrust of proven, transparent AI-powered solutions leads many security teams to pursue the wrong kinds of AI, avoid beneficial implementations, or dismiss trustworthy technologies outright.