Skip to content

TIPS #34: The AI Security Buyer Diaspora

Shane Shook

December 11, 2025

  • Blog Post
  • TIPS

Issue: AI buyers in the enterprise aren’t security buyers- but they should be.

The AI-ification of enterprise technology is indisputable. Research from Nudge Security in December 2025 identified over 1,500 unique AI tools across enterprise environments, with organizations using 39 unique AI tools on average.

Business units are adopting AI to support myriad workflows and functions in a decentralized process that creates a diaspora of AI buyers across the enterprise:

  • Product teams are buying AI copilots to streamline product development
  • Marketing leaders are signing up for AI-native SaaS to support content development and personalization
  • Developers are piloting AI coding assistants for automated code generation and bug prediction within Integrated Development Environments (IDEs)
  • Compliance is shopping for AI tools that automate GRC and regulatory change management

Securing AI across the enterprise is a challenge

From a cybersecurity perspective, this presents a multifaceted problem.

First, non-security business units buying and using AI do not tend to prioritize security. They are seeking to improve business outcomes and often adopt and use AI in risky ways, as we covered in TIPS #31. The scope of this issue is significant: a 2025 report from the National Cybersecurity Aliance found that 43% of workers admitted to sharing sensitive work information with AI tools without their employer’s knowledge, while 52% reported they have not received any training on the security or privacy risks of AI tools.

In addition, AI use isn’t always visible- even to business units themselves. While the use of AI is often elective, like the above examples, GenAI and copilot features are increasingly added to existing SaaS and enterprise platforms- think smart suggestions, AI summarization, and built-in copilots. This passive AI is often quiet and by default. Users may not even realize an AI model is touching their data, complicating risk assessments, cybersecurity, and regulatory compliance.

Further, many existing cybersecurity tools and controls are ill-equipped to secure AI, as we discussed in TIPS #16. There are new data security, access control, and attack surface considerations to grapple with, not to mention threats to AI models and integrations.

All of this leads to shadow AI and AI sprawl. AI tools and features are purchased and used in insecure ways without sufficient oversight or controls, creating risk blind spots and real consequences. The Cloud Security Alliance’s 2025 State of Cloud and AI Security report showed that 89% of organizations are using or piloting AI with 34% already suffering an AI-related breach due to rapid AI adoption without proper security controls.

Rethinking security for AI

The problem goes beyond risky AI use and tools that don’t adequately secure AI. The cybersecurity procurement model itself is not up to the task.

The typical centralized security buying model, where the CISO oversees a security stack and delivers security as a service that business units consume, is ineffective given how AI is bought. AI does not exist as a single platform that can be secured. It’s in the fabric of numerous SaaS tools, workflows, and business models across the organization. CISOs and CIOs are out of the loop when AI purchase decisions are happening and are forced to play catch-up to identify and secure AI use. In many cases, security teams never see AI data flows or model behaviors- and you can’t secure what you can’t see.

Given that AI buying is decentralized and has expanded the IT diaspora, we need to rethink the approach. When every app has AI and every team is experimenting with it, AI risk stops being a centralized, infrastructure-only concern. It becomes a product, workflow, and revenue concern- precisely where business unit leaders, CTOs, and heads of product operate. These AI buyers need to be the new security for AI buyers.

Impact: The gap between decentralized AI use and centralized AI security can lead to data leaks, poisoned models, and unmitigated risk.

The current misalignment between who buys AI and who buys security for AI leads to an expanding attack surface, shadow AI assets, and ungoverned workflows with insufficient access controls. This creates the conditions for data loss, insider leaks, and adversary targeting.

The following case studies demonstrate the cost of this gap.

“ConfusedPilot” and RAG Poisoning

In late 2024, researchers at UT Austin’s Spark lab led by Symmetry Systems CEO and co-founder Mohit Tiwari discovered “ConfusedPilot,” a method to manipulate AI responses in Retrieval Augmented Generation (RAG)-based AI systems without touching AI models directly. The researchers found that attackers could introduce a document with carefully crafted strings into the data environment, potentially forcing the AI to suppress facts, generate misinformation, or falsify attribution. Any user relying on manipulated responses- whether employees using AI to make business decisions or customers using enterprise AI chatbots- might act on the manipulated responses, causing a loss of business value, loss of trust, and liability.

ConfusedPilot shows a consequence of decentralized AI ownership. Without robust data sanitization, data security posture management (DSPM), or other key AI data security controls, a single malicious document could poison the AI model.

2025 Scale AI training data leak

In June 2025, Business Insider reported that Scale AI, a major AI data-labeling firm supporting Google, Meta and xAI, had stored thousands of documents with confidential AI training materials and contractor data in publicly accessible Google Docs. In some cases, the documents were also publicly editable. The data in the documents included confidential model training manuals, model weaknesses, and contractor information, and directly informed AI model training for major AI providers whose models are used by organizations globally.

While the exposed data has yet to culminate in a confirmed breach, this is an illustrative example of a company acquiring or using tools and workflows to reduce friction without sufficient security architecture, access controls, or IT and security visibility. It also shows how AI model providers and their customers can both be impacted by data leaks in the model training process.

Action: Map your organization’s AI buyer landscape, empower business units to use AI safely, and ensure strong AI security posture with well-orchestrated GRC.

AI buyers are already decentralized. AI discovery, control, and governance layers have to meet the new security for AI buyers where they are. The job of modern AI security leadership should be less about owning every purchase and more about orchestrating a complementary security posture across many buyers with shared discovery, runtime control, and governance.

1) Map the AI buyer landscape and collaborate with business units

Start by building a simple inventory of who is buying AI tools, who owns each AI-enabled application, and where budget authority sits. When approaching the security for AI problem, treat AI buyers as stakeholders in a shared AI governance council, not rogue actors that need to be shut down. Use your security expertise to help business units meet their AI use case objectives in a secure manner. Business units need to be informed security buyers as well as AI buyers.

Start by building a simple inventory of who is buying AI tools, who owns each AI-enabled application, and where budget authority sits. When approaching the security for AI problem, treat AI buyers as stakeholders in a shared AI governance council, not rogue actors that need to be shut down. Use your security expertise to help business units meet their AI use case objectives in a secure manner. Business units need to be informed security buyers as well as AI buyers.

2) Make SaaS & AI discovery table stakes

Continuously inventory SaaS and AI tools, including elective AI and passive AI features. This gives security and business units a shared, factual view of what exists: a comprehensive registry that includes both sanctioned tools and shadow AI usage. Nudge Security helps you discover AI applications in your organization and uncovers where AI is embedded and integrated across your entire SaaS ecosystem.

“Security teams can't protect what they can't see, and they can't solve a workforce problem without engaging the workforce in the solution.”

Russell Spitler Co-founder and CEO, Nudge Security

3) Secure AI like an endpoint with observability and guardrails

Adopt “EDR for AI”-like capabilities that provide observability, policy enforcement, and data leak prevention for prompts, completions, and AI agents across the estate. This should integrate with network, identity, and DLP controls rather than exist as another silo. WitnessAI provides real-time visibility, AI-native protection, and behavior-based governance for every AI model, application, and autonomous agent in your organization.

“AI is already here. AI governance isn’t about slowing down progress; it’s about gaining full visibility into how employees and business units use AI and establishing controls without hindering momentum.”

Rick Caccia Co-founder and CEO, WitnessAI

4) Anchor your efforts in GRC to avoid risk silos

Decentralized AI security buying can reproduce the classic problems of siloed risk management, leading to a higher likelihood of breaches. To avoid this issue, bring AI risk management into your enterprise GRC platform, map AI use cases to controls and frameworks, and ensure evidence collection includes business unit-owned AI initiatives. Hyperproof helps you streamline compliance operations, mitigate risks, and build trust with customers and stakeholders.

“GRC initiatives need to bridge the gap between security and business results, down to the business unit level. That transforms security and risk management from cost centers into capabilities that add value.”

Craig Unger Founder and CEO, Hyperproof

5) Create policies for elective and passive AI

Collaborate with business units to establish approved AI tools, data classification rules, and training programs to manage elective AI. To manage passive AI, require vendors to disclose AI use that impacts your data, ensure contracts address model training and retention, and treat AI features as in-scope for third-party risk assessments.

A final note for security vendors

Like enterprise security leaders, security vendors need to recognize and support business unit AI security buyers. It’s essential to build solutions that enable secure AI outcomes such as shipping product features, coding, and visibility into spending and compliance. Vendors should ensure their product development, GTM, and customer success efforts are designed to meet the unique business needs of AI buyers across the enterprise.