
Margin of Safety #13 – Post-AI Security

Jimmy Park, Kathryn Shih

May 13, 2025


We’ve written a number of blogs about securing AI systems and about how AI might be used to improve security capabilities or workflows. But there’s a third category of AI investment: preparing and adapting to a post-AI security ecosystem.

Some parts of this preparation are already top of mind for practitioners and startups, such as deepfake readiness or dealing with the increased volume and reduced latency of CVE exploitation. While some top-of-mind areas are already crowded – how many SOC automation startups did you count at RSA? – others seem to be flying under the radar. Let’s change that by looking at a few areas of classic security that might be further disrupted by AI. We’ll look at threat intel, patching, and vulnerability management, and explore how AI might change the attacker landscape in ways that require defenders to update their thinking.

AI-Enhanced Offensive Capabilities

We expect attackers to realize AI productivity gains similar to defenders’, but for these gains to manifest differently across capabilities. Looking at how AI use cases translate to attacker capabilities, we already see AI empowering attackers to scale existing attack strategies to higher volumes. In the medium-term future, we believe attackers are likely to scale out additional AI-enabled capabilities.

1. Hyper-Personalized Reconnaissance and Scaled Social Engineering

The first is improved reconnaissance, particularly to inform phishing, relationship, or human-factors attacks on organizations. Semantic search and feature extraction make it easy to scale personalization. That’s great for consumer products, but also powerful for adversaries trying to tailor attacks to individual companies or employees. Deepfake-powered spearphishing is top of mind for many (and why we invested in the excellent team at GetReal!), but we think this can go further.

We anticipate AI will enable the creation and management of highly convincing, contextually aware fake personas, akin to the “Jia Tan” identity from the XZ Utils attack, but deployable in far greater numbers. Imagine a single adversary, armed with AI tooling, simultaneously creating and nurturing a fleet of such digital identities. While the long lead times required to establish trust will remain a factor, the sheer scale of these operations could become overwhelming. This level of sophisticated, AI-driven reconnaissance and persona generation will inevitably render traditional security questions, already a questionable practice, fully obsolete, necessitating entirely new paradigms for account recovery and continuous identity verification.

2. Automated and Adaptive Malicious Code Generation

There’s been some talk of AI-powered, fully polymorphic malware with real-time, dynamic adaptation to defender actions. We don’t think this is real yet. Research PoCs of such systems have required real-time access to powerful LLM endpoints. Even with LLM costs dropping another 100-1000x, attackers would struggle to sneak a gargantuan generative AI payload past a competent defender. Plus, we don’t yet have solutions for true real-time learning in generative AI systems; context can be updated, but the core knowledge captured in model weights still requires offline training cycles to change or improve. That said, we can imagine a middle ground. Reports on generative AI misuse already show examples of attackers using commercial LLMs to research long-tail technical systems, presumably with the goal of attacking those systems.

The next step on this path is to use the software creation and translation capabilities of commercial models to generate malicious payloads on a per-target basis. This wouldn’t be true polymorphism; rather, AI’s automation and generation capabilities would turn the sort of target-specific toolkit generation we’ve classically seen in nation-state attacks on high-value targets into a mass-market capability. The implication is that defenders must now prepare for newly up-leveled attacks.

3. Productivity Improvements and Decreased Time-to-Exploit

In addition to potentially enabling more advanced attacks, AI is accelerating the creation of standard ones. Major vendors are reporting that average time-to-exploit (TTE) has dropped from a historic 30+ days to under a week, with exploits sometimes appearing within 24 hours of a CVE’s publication. Much as “vibe coding” and AI-assisted development are improving velocity for legitimate engineering teams, these same productivity gains are being realized by attackers.

This acceleration, combined with AI’s capacity to automate and scale attack execution, means defenders are facing an ever-increasing deluge of threats. The old model of patching high-CVSS vulnerabilities within a 30-day window was already flawed; in a world where TTE can be measured in hours or a few days, it is entirely unworkable. This sheer volume and velocity is a major tailwind for SOC automation. Even if automation is imperfect, security teams, often already struggling with staffing, will have no choice but to automate more aggressively. While some, like CrowdStrike (whose recent layoffs might suggest a belief that automation gains currently favor defenders in SOC efficiency), are adapting, this is unambiguously a high-stakes productivity race. Every defensive organization must critically assess how they intend to win it.
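To make the SLA math concrete, here’s a back-of-the-envelope sketch. It assumes, purely for illustration, that TTE is exponentially distributed; the mean TTE values are our own round numbers, not vendor-measured data.

```python
# Back-of-the-envelope model: what fraction of exploits land before the
# patch does? Assumes time-to-exploit (TTE) is exponentially distributed,
# which is a simplification; mean TTE values are illustrative round
# numbers, not measured vendor data.
import math

def exploit_beats_patch(mean_tte_days: float, patch_sla_days: float) -> float:
    """P(exploit appears before the patch SLA elapses) under exponential TTE."""
    return 1 - math.exp(-patch_sla_days / mean_tte_days)

for mean_tte, label in [(30, "historic ~30-day TTE"), (5, "current ~5-day TTE")]:
    for sla_days in (30, 7, 2):
        p = exploit_beats_patch(mean_tte, sla_days)
        print(f"{label}, {sla_days:>2}-day patch SLA: {p:.0%} chance the exploit wins")
```

Under a 5-day mean TTE, even a 7-day SLA loses the race roughly 75% of the time, and a 30-day SLA loses it almost always; the exact figures depend on the distribution assumption, but the direction doesn’t.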

AI-Enhanced Defensive Capabilities

4. Threat Intelligence: From Reactive to Predictive

The traditional model of threat intelligence, often reliant on identifying and cataloging previously observed Indicators of Compromise (IOCs), is severely stressed by AI-empowered adversaries. Attackers can now more cheaply and rapidly generate novel attack infrastructure, code, and personas, significantly reducing the reuse of many traditional indicators. In this dynamic environment, knowing the threat status of previously observed entities becomes relatively less important than the ability to rapidly assess the disposition of entirely novel ones. Defensive postures must therefore evolve towards scaled evaluation tooling: systems capable of rapidly assessing anything from technical infrastructure and software binaries to alleged human identities, often with minimal prior information. They must also move towards predictive analysis, going beyond known threats to anticipating emergent attack vectors and methodologies, potentially leveraging AI to model and predict attacker behavior.
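As a thought experiment, here’s a minimal sketch of what disposition scoring for a never-before-seen entity might look like, using a freshly observed domain as the example. The features, weights, and thresholds are hypothetical placeholders, not a production model or any vendor’s method.

```python
# Minimal sketch of disposition scoring for a never-before-seen domain.
# Instead of asking "is this on a blocklist?", we score intrinsic features
# available at first contact. Features, weights, and thresholds below are
# hypothetical placeholders.
import math
from dataclasses import dataclass

@dataclass
class DomainObservation:
    registered_days_ago: int   # from WHOIS/RDAP
    cert_age_days: int         # from the TLS certificate
    label: str                 # the domain label itself

def shannon_entropy(s: str) -> float:
    """Lexical entropy; algorithmically generated names tend to score high."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def disposition_score(obs: DomainObservation) -> float:
    """Higher = more suspicious. Weights are illustrative."""
    score = 0.0
    if obs.registered_days_ago < 30:
        score += 0.4          # freshly registered infrastructure
    if obs.cert_age_days < 7:
        score += 0.2          # certificate minted just before use
    if shannon_entropy(obs.label) > 3.5:
        score += 0.3          # random-looking label
    return score

obs = DomainObservation(registered_days_ago=3, cert_age_days=1,
                        label="xk9q2-paypal-login")
print(f"score={disposition_score(obs):.2f}")  # act on the score, not prior sightings
```

The point is structural: every input is observable at first contact, so the verdict doesn’t depend on whether anyone has seen the entity before.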

5. Vulnerability Management & Patching: Speed, Context, and Cost-Efficiency

While reachability analysis and contextual prioritization—determining whether a vulnerability is actually exploitable in your specific environment—are valuable, they are not a panacea. Even non-invoked vulnerable capabilities can be exploited by attackers as part of broader campaigns. Prioritization must become more dynamic, continuous, and deeply integrated with real-time threat intelligence. Today, patching is expensive and disruptive, involving extensive testing, staged rollouts, and meticulous monitoring. This operational overhead limits the velocity and scope of patching. A significant win will come from tools and processes that reduce both the human and opportunity costs of deploying patches.

In the extreme case, if patches were free, we wouldn’t need prioritization at all; the answer would be to patch everything. In practice, rollouts are unlikely to ever be free, so we expect a lasting need for prioritization tooling. But we believe savvy operators should push on both sides of this equation: better prioritization to pick the most critical patches, and smoother, lower-cost patching to allow for the largest possible set of fixes.
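To illustrate why both levers matter, here’s a toy model. The patch “risk” scores and rollout-hour figures are invented, and the greedy knapsack heuristic stands in for whatever real prioritization logic a team uses.

```python
# Sketch of "pushing on both sides of the equation": prioritization picks
# the highest risk-per-cost patches, while cheaper rollouts expand how many
# patches fit in a fixed budget. All numbers are invented for illustration.

def plan_patches(patches, budget_hours):
    """Greedy selection by risk reduced per rollout hour (knapsack heuristic)."""
    ranked = sorted(patches, key=lambda p: p["risk"] / p["cost_hours"], reverse=True)
    plan, spent = [], 0.0
    for p in ranked:
        if spent + p["cost_hours"] <= budget_hours:
            plan.append(p["cve"])
            spent += p["cost_hours"]
    return plan

patches = [
    {"cve": "CVE-A", "risk": 9.0, "cost_hours": 40},
    {"cve": "CVE-B", "risk": 7.0, "cost_hours": 8},
    {"cve": "CVE-C", "risk": 4.0, "cost_hours": 2},
    {"cve": "CVE-D", "risk": 6.0, "cost_hours": 30},
]

print(plan_patches(patches, budget_hours=48))   # today's costly rollouts
# Now suppose better tooling cuts rollout cost 4x: the same 48-hour budget
# covers everything, and prioritization matters correspondingly less.
cheaper = [{**p, "cost_hours": p["cost_hours"] / 4} for p in patches]
print(plan_patches(cheaper, budget_hours=48))
```

In the toy run, quartering rollout cost does more for coverage than any re-ranking could; cheaper patching quietly shrinks the problem that prioritization exists to solve.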

6. 0-days: Unpatchable Vulnerabilities

Beyond patching, if the trend towards faster exploits continues, defenders will need to rely increasingly on rapid mitigations for some classes of vulnerability. Over time, we may eventually reach a world in which AI can automatically drive creation of a full test suite, provision and manage test environments, and validate a fix in near real time. But that world is not in the immediate future, especially compared to the speed of exploits. This means defenders need to be prepared for a growing number of exploits for which no safe patch is available. In that world, the only option is to reduce short-term risk by reducing the exposure of the vulnerability. This could mean adding firewall rules, tightening other ACLs, or otherwise isolating systems with a known vulnerability until the true fix can be identified, validated, and deployed. To this end, systems that can not only use contextual analysis to prioritize vulnerabilities but also extend that analysis to determine the cheapest, safest short-term mitigation may prove invaluable.
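Sketching that idea: given a risk ceiling, pick the least disruptive mitigation that brings residual exposure below it. The mitigation list, exposure-reduction fractions, and disruption scores below are hypothetical placeholders.

```python
# Sketch: when no safe patch exists, pick the cheapest mitigation that
# brings exposure below a risk threshold. All figures are hypothetical.

MITIGATIONS = [
    # (name, fraction of exposure removed, operational disruption 0-10)
    ("block inbound port at firewall", 0.9, 2),
    ("tighten ACL to admin subnet",    0.7, 1),
    ("isolate host from network",      1.0, 8),
]

def pick_mitigation(current_exposure: float, max_residual: float):
    """Return the least disruptive mitigation meeting the residual-risk bar."""
    viable = [
        m for m in MITIGATIONS
        if current_exposure * (1 - m[1]) <= max_residual
    ]
    return min(viable, key=lambda m: m[2]) if viable else None

# Internet-facing service with a fresh, not-yet-patchable CVE:
print(pick_mitigation(current_exposure=0.8, max_residual=0.1))
```

Here the firewall rule wins: it meets the risk bar without the disruption of full isolation, buying time for the real fix to be validated and deployed.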

Conclusion

AI isn’t just changing the tools we use to defend — it’s redefining the rules of engagement. As attackers adopt AI to move faster, scale wider, and operate with more precision, defenders will need to do more than just react. Classic security domains like threat intel and vulnerability management aren’t going away, but they are being forced to evolve into real-time, AI-augmented systems. The challenge ahead is less about detecting new threats and more about keeping pace with how quickly those threats can now evolve. We met with several startups building here. Feel free to reach out if we haven’t met you yet!

Stay tuned for more insights on securing agentic systems. If you’re a startup building in this space, we would love to meet you. You can reach us directly at: kshih@forgepointcap.com and jpark@forgepointcap.com.

 

This blog is also published on Margin of Safety, Jimmy and Kathryn’s Substack, as they research the practical sides of security + AI so you don’t have to.