
Margin of Safety #39: Next Gen IGA

Jimmy Park, Kathryn Shih

January 14, 2026


Why do security practitioners often say IGA is broken?

*Special thanks to Dr. Shane Shook (aka the Yoda of Cybersecurity) for his valuable insights and collaboration on this post!

Identity Governance and Administration (IGA) has been a large category for many years. Enterprises have spent real money on it. And yet, if you talk to practitioners, a strikingly consistent complaint emerges: basic identity events still run on tickets, emails, and spreadsheets.

There are myriad examples of workflows that IGA was supposed to solve but that remain far more manual and brittle than we’d expect from ‘solved’ problems: onboarding and offboarding, managing role changes and reorganizations, and so on. This persists despite broad IGA adoption. We believe the root problem is that IGA fails to match how access actually works.

Where Traditional IGA Breaks Down

The most obvious failure mode is visibility. Modern access is fragmented across hundreds of SaaS apps, multiple cloud providers, internal tools, and custom systems. Each environment defines permissions differently, fragmenting both the source of truth and the core language used to describe permissions. Legacy IGA systems ingest this sprawl but fail to normalize it; this puts the onus on the user to reason about an impossible number of permissions in complex environments[1].
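To make the normalization gap concrete, here is a minimal sketch of mapping system-native permission strings into a shared action vocabulary. The mapping and the “unmapped” fallback are illustrative assumptions, not a real IGA catalog (though `s3:GetObject` and `storage.objects.get` are genuine AWS and GCP permission names):

```python
# A hypothetical mapping from system-native permission strings to a
# shared action vocabulary. A real catalog would hold thousands of
# entries per provider; unknown permissions are flagged, not guessed.
NORMALIZATION_MAP = {
    ("aws", "s3:GetObject"): "read",
    ("gcp", "storage.objects.get"): "read",
    ("salesforce", "ModifyAllData"): "admin",
}

def normalize(system: str, native_permission: str) -> str:
    """Return the normalized action, or 'unmapped' if unknown."""
    return NORMALIZATION_MAP.get((system, native_permission), "unmapped")
```

The point of the fallback is governance hygiene: anything the catalog cannot translate should surface as an explicit gap rather than silently disappearing from review.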

The downstream effects are predictably bad: 1) roles become brittle or meaningless, 2) certifications surface permissions without explaining what they enable (or the deeper risks of that enablement), and 3) governance looks comprehensive while remaining shallow.

Even when users get past these issues, IGA tools share a core problem: they don’t provide a path to effective remediation. Typically, legacy IGA tools are approval orchestrators. They route decisions, but they struggle to enact precise changes across systems, especially when access in one tool implies access elsewhere. Fixing excessive or risky access devolves into coordination across teams, tickets, and manual follow-ups. Orphaned access persists long after it’s been flagged, quietly extending the attack window.

Access reviews are meant to compensate for these gaps. Instead, they have become largely ineffective. Reviewers are presented with raw permissions (groups, roles, policy names) without context about usage, data sensitivity, or accumulated risk. The outcome is often rubber-stamping at scale, where access reviews become compliance artifacts rather than meaningful controls.
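One way to picture the missing context is a review item that carries provenance, usage, and sensitivity alongside the raw permission name. The fields and the 90-day staleness threshold below are illustrative assumptions, not a product schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ReviewItem:
    """A certification line item enriched beyond a raw permission name."""
    permission: str            # e.g. a group, role, or policy name
    granted_via: str           # provenance, e.g. "group: finance-admins"
    last_used: Optional[date]  # None means never observed in use
    data_sensitivity: str      # e.g. "contains customer PII"

def needs_attention(item: ReviewItem, stale_days: int = 90) -> bool:
    # Surface unused or never-used access to the reviewer instead of
    # presenting a bare permission string for rubber-stamping.
    if item.last_used is None:
        return True
    return (date.today() - item.last_used).days > stale_days
```

Even this toy version changes the reviewer’s question from “does this role name look fine?” to “why does this person still hold sensitive access they never use?”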

The Shift to Next Gen IGA: From Periodic Reviews to Continuous, Risk-Based Governance

What does Next Gen IGA entail?

First, unified visibility across all systems and all identities — SaaS, cloud, on-prem, and custom applications. This increasingly includes both human and non-human identities. Whether those identities should be governed identically is an open question. Humans accumulate access organically as roles evolve. Non-human identities (service accounts, workloads, agents) tend to be task-specific and deterministic. Policy may differ, but the entitlement model underneath should not.
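As a sketch of what “policy may differ, but the entitlement model underneath should not” could look like, the types below are illustrative assumptions rather than any vendor’s schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class IdentityKind(Enum):
    HUMAN = "human"
    SERVICE_ACCOUNT = "service_account"
    WORKLOAD = "workload"
    AGENT = "agent"

@dataclass(frozen=True)
class Entitlement:
    """One normalized grant, regardless of who holds it."""
    system: str    # e.g. "aws", "salesforce"
    resource: str  # system-native resource identifier
    action: str    # normalized verb, e.g. "read", "admin"

@dataclass
class Identity:
    id: str
    kind: IdentityKind
    entitlements: set = field(default_factory=set)

# Policy can branch on identity kind (the cadences here are invented),
# but every kind shares the same Entitlement model above.
def review_cadence_days(identity: Identity) -> int:
    return 90 if identity.kind is IdentityKind.HUMAN else 30
```

Humans and service accounts get different review policies, but a single entitlement representation means risk can be compared across both populations.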

Second, governance moves from groups and roles to effective permissions with context. That means understanding how access was granted, what data it touches, how it’s used, and how risky it is in aggregate. Permissions are evaluated not as static objects, but as evolving relationships tied to business needs and policy over time. Taken to its logical conclusion, this implies not just reasoning about the risk of an access grant, but eventually reasoning about the surface area of all access a user has. For example, given sufficiently broad access, users might be able to start extrapolating highly sensitive data in undesirable ways. In this case, no single grant is problematic, but the sum total introduces novel risk.
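A toy version of scoring the surface area of a user’s access, rather than each grant in isolation, might look like the following. The grants, weights, and the “toxic combination” penalty are invented for illustration:

```python
# Per-grant risk weights -- invented values for illustration only.
GRANT_RISK = {
    ("crm", "read_pii"): 3,
    ("warehouse", "export"): 2,
    ("billing", "read"): 1,
}

# Pairs that are individually acceptable but risky together, e.g.
# reading customer PII plus the ability to export data in bulk.
TOXIC_COMBINATIONS = {
    frozenset({("crm", "read_pii"), ("warehouse", "export")}): 5,
}

def surface_area_risk(grants: set) -> int:
    """Score the whole set of grants, not each grant alone."""
    score = sum(GRANT_RISK.get(g, 0) for g in grants)
    for combo, penalty in TOXIC_COMBINATIONS.items():
        if combo <= grants:  # all grants in the combination are held
            score += penalty
    return score
```

Note how the combined score exceeds the sum of the parts: that extra penalty is exactly the “no single grant is problematic, but the sum total introduces novel risk” case.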

Lastly, automation becomes possible with AI on guardrails. While some approvals require nuanced consideration, others have obvious answers. These clear extremes provide a natural starting point for selective, lower-risk automation, while systems continue to loop in human reviewers and escalation points for more nuanced cases. Much like with SOC automation, we expect this to simultaneously reduce latency on core workflows (by automating simple cases) and let organizations better leverage expensive human review by focusing it on cases where judgment is actually required.
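The triage logic described above can be sketched as a simple router. The thresholds and decision labels are assumptions, and a real system would feed far richer signals (and AI-generated context) into the score:

```python
def triage_request(risk_score: int, used_recently: bool,
                   low: int = 2, high: int = 8) -> str:
    """Automate the clear extremes; escalate everything in between."""
    if risk_score <= low and used_recently:
        return "auto_approve"  # obviously safe: grant without waiting
    if risk_score >= high:
        return "auto_revoke"   # obviously risky: remove and notify
    return "human_review"      # nuanced: route to a human reviewer
```

The guardrail is structural: automation only touches the extremes, so every ambiguous case still lands in front of a person.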

A Note on Agents and Non-Human Identity

This shift to next gen IGA doesn’t necessarily require a separate IGA for agents. The right answer is extending entitlement intelligence to non-human identities on the same foundation. The complication, however, is ownership. Human identities live with IT, HR, and compliance. Non-human identities are often owned by engineering.

The Limits of Unified Governance

As non-human identities become first-class citizens in modern environments, it is tempting to assume that identity governance workflows can be fully consolidated across human and non-human access. Unified visibility, richer context, and AI-driven analysis appear to make this possible.

In practice, this assumption breaks down. The limitation is not tooling maturity or data coverage; rather, it is that risk understanding itself is distributed across domains.

Engineering teams can describe the technical blast radius of modifying or revoking a service account: pipelines stall, retries cascade, downstream systems degrade. Business leaders can describe the operational and financial consequences of restricting human access: revenue analysis breaks, audits are delayed, forecasting and compensation processes are disrupted. Critically, neither group fully understands the other’s risk domain, even when impacts can be clearly enumerated.

AI can increasingly explain what will break when access changes on either side. What it cannot do is assign business meaning (or ownership) to those failures. No single individual or function has the authority or context to evaluate whether a given failure is acceptable when consequences span technical reliability, regulatory exposure, and business outcomes simultaneously.

This creates a structural limit for IGA. While entitlement intelligence can and should be unified across human and non-human identities, risk acceptance cannot be centralized. Ownership becomes partial, accountability fragmented, and approvals increasingly represent forced decisions rather than informed judgments. Next gen IGA must therefore focus less on enforcing singular decisions and more on surfacing cross-domain impact so risk can be acknowledged and managed where it actually resides.

Governance Has a Hard Limit

The persistent frustration practitioners express with IGA is often framed as a failure of execution: insufficient visibility, brittle roles, weak automation, or ineffective reviews. While these issues are real, they point to a deeper constraint. Identity governance is limited not just by what systems can see or enforce, but by what organizations can meaningfully evaluate.

Modern access spans humans, service accounts, workloads, agents, and automated systems, each embedded in different operational and business contexts. Access decisions are binary, but the risk they represent is multi-dimensional and non-linear. Even with complete visibility and AI-generated explanations, no single team can assess identity risk across all dimensions with confidence.

This becomes most evident when governance attempts to consolidate human and non-human identity workflows. Reviewers may understand technical failure modes or business process impact, but rarely both. Approval workflows force binary decisions in situations where no approver has full situational awareness, turning governance into a compliance artifact rather than a true control.

The implication for next gen IGA is not that consolidation is wrong, but that centralized authority is an illusion. Effective governance does not come from a single workflow, owner, or decision point. It comes from acknowledging that identity risk is inherently distributed and designing systems that continuously surface impact, quantify exposure, and enable federated risk ownership rather than pretending unified judgment exists.

IGA feels broken not because enterprises failed to implement it, but because it was built on an assumption that no longer holds: that identity risk can be fully understood, evaluated, and owned by a single function. Next gen IGA succeeds by designing for that reality, not by trying to overcome it.

Conclusion

IGA feels broken not because enterprises ignored it, but because access has changed faster than governance models have adapted. As identities diversify and permissions fragment, we argue that governance should shift from periodic, role-based administration to continuous, risk-based decision-making. In our market map above, we highlight some emerging next gen players in this space. If you’re building something in this space, feel free to reach out to jpark@forgepointcap.com and kshih@forgepointcap.com.

This blog is also published on Margin of Safety, Jimmy and Kathryn’s Substack, as they research the practical sides of security + AI so you don’t have to.

[1] We dare you to try to enumerate and define all the permissions used in a modern cloud environment without resorting to ChatGPT or Google.