Margin of Safety #49: LiteLLM and Security for Non-Developers

Jimmy Park, Kathryn Shih

March 31, 2026

Vibe coding risks creating a painful new security blind spot when non-developer users inherit malicious dependencies.

Last week, while many folks focused on RSA, a different security story emerged: the LiteLLM supply chain compromise. It’s already widely discussed, so we’ll try to avoid repeating core facts. TL;DR: as part of a broader campaign, the threat actor TeamPCP uploaded malicious versions of LiteLLM to Python’s package repository, PyPI. These versions stole credentials and installed backdoors on affected systems. Because the library is frequently a transitive dependency — meaning automatically pulled by other tools — many users were surprised to be impacted.

But that’s assuming users knew to check. We had a conversation like this:

Kathryn: Hey Jimmy, you’ve been vibe coding a bunch in your free time. How much Python are you using? Cause you should maybe check if you somehow had the LiteLLM library pulled in and need to rotate all your credentials.

Jimmy: Python libraries what? I just tell Claude to make things and it does.[1]

A new frontier is opening for open source. Historically, engineers have accidentally picked up libraries through buried transitive dependencies. But they knew they were using external code; the only question was whether they knew their full SBOM.[2] This intentionality left a thread (albeit sometimes a thin one) to follow for critical updates. Now, vibe coding is unleashing a wave of users who don’t realize they have an SBOM or third-party dependencies, full stop.

In this particular exploit, LiteLLM was specialized enough to reduce the risk of accidental direct adoption. Affected vibe coders were hit through transitive dependencies of more mainstream tools like CrewAI, DSPy, or MLflow. But the same risk exists in the more general case: plenty of other open-source libraries will be pulled into fully vibed, zero-to-one efforts.
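
If you’re unsure whether this applies to you, a quick local check doesn’t require knowing your SBOM. The sketch below is a rough illustration rather than an official remediation tool: it uses Python’s standard importlib.metadata to look for litellm in the active environment and to list which installed packages declare it as a dependency. Finding it doesn’t mean you pulled a malicious version, only that it’s worth checking which version you have.

    # Rough illustration: is litellm present in this Python environment,
    # and which installed packages declare it as a dependency?
    from importlib.metadata import distributions

    TARGET = "litellm"
    found = False
    for dist in distributions():
        name = (dist.metadata.get("Name") or "").lower()
        if name == TARGET:
            found = True
            print(f"{TARGET} {dist.version} is installed")
        # dist.requires lists declared dependencies (may be None)
        for req in dist.requires or []:
            if req.lower().startswith(TARGET):
                print(f"{name} declares a dependency on {TARGET}: {req}")

    if not found:
        print(f"{TARGET} does not appear to be installed in this environment")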

This has real security implications. How do you alert users to the need to patch (or rotate keys, or remove a persistent backdoor) if they don’t realize they’re users of anything except Claude? Jimmy is active in the security community on LinkedIn and saw the news. But plenty of vibe coders lack even that connection. And not only is it technically infeasible for open-source projects to enumerate their users, but package maintainers often lack resources for bulk outreach. This means the onus is on the user to know about critical updates, not on the open-source project to provide proactive notifications.

None of this is new in isolation. Engineers have always written one-off tools that never get patched. What’s changed is the scope: vibe coding is spreading among non-technical white-collar workers, and tools like Claude repeatedly reach for the same libraries. This brings new predictability and scale to the problem. Now, any attacker can observe a Claude environment, watch the packages it favors, and target those with the knowledge that they’re likely widely deployed by novice users.[3]

Given this expanding blast radius, what does the security community do about vibe coders who won’t know they’ve acquired a persistent backdoor? Leaving them to become an attack group’s super-botnet seems untenable. We think there are a few directions to push, all focused on safe-by-default.

One is sandboxing and other forms of workload isolation. Anthropic already supports sandboxed bash for Claude Code; we believe full sandboxing should be standard for coding agents, especially for users for whom yolo’ing loosely vetted code or libraries into the OS is a particularly high-risk activity.[4]

Another is update and notification flows. Some tooling today manages dependency updates and warns on critical updates. But for the vibe coder, there’s no guarantee of ongoing attention to a tool’s code to trigger these flows. We believe tool developers and solution providers should be thinking about tracking and notification flows for vibe coders who consumed a bad dependency in passing, even if they’re no longer actively developing that project.
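
As one rough sketch of the mechanics, a tool could run a scan like the one below on a schedule on the vibe coder’s behalf. It shells out to pip-audit, an existing open-source scanner that checks installed packages against public vulnerability advisories and must be installed separately; the scheduling and the can’t-miss-it notification are the assumed product work.

    # Rough sketch: scan the active Python environment for packages with known
    # advisories by shelling out to pip-audit (pip install pip-audit).
    import subprocess

    result = subprocess.run(["pip-audit"], capture_output=True, text=True)
    if result.returncode != 0:
        # pip-audit exits non-zero when it finds known-vulnerable packages (or errors);
        # a real flow would turn this into a notification the user actually sees.
        print("Dependency advisories found:")
        print(result.stdout or result.stderr)
    else:
        print("No known advisories in this environment.")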

Finally, restricting users to heavily vetted, version-pinned components. This is obviously not fool-proof; after all, the LiteLLM compromise successfully hit commercially supported security tools! But restricting fully uninformed users to a narrowed set of lower-risk packages until they know enough to remove the constraint is reasonable. Such capabilities must be on by default; the users who benefit most won’t go looking for a buried checkbox.
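
As a sketch of what that enforcement could look like on the tool side (the allowlist and version pins below are hypothetical, not recommendations), a coding agent could compare what is actually installed against a curated, pinned set and flag anything outside it:

    # Illustrative only: a hypothetical curated allowlist of version-pinned packages.
    # A tool enforcing "vetted, version-pinned components by default" could warn
    # whenever the environment drifts outside this set.
    from importlib.metadata import distributions

    ALLOWLIST = {
        "requests": "2.32.3",  # hypothetical pins, not recommendations
        "pandas": "2.2.2",
    }

    for dist in distributions():
        name = (dist.metadata.get("Name") or "").lower()
        pinned = ALLOWLIST.get(name)
        if pinned is None:
            print(f"Not on the vetted list: {name} {dist.version}")
        elif dist.version != pinned:
            print(f"Version drift: {name} is {dist.version}, vetted pin is {pinned}")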

For readers in a position to set security policy, we think this issue merits planning. Rapid vulnerability response should be top of mind for everyone; AI is rapidly expanding attacker exploit capabilities. But as organizations adopt AI, security teams should also be considering whether all code authors will connect the dots between their own efforts and vulnerability notifications. If they might not, security needs to ensure the right combination of preventative tooling and rapid remediation capabilities.

The LiteLLM incident is a preview of a structural problem that will only grow. Open source wasn’t designed for a world where its users don’t know they’re users — and the security practices built around it assume a level of technical engagement that vibe coders lack by definition. That’s not an indictment of vibe coders, but rather an observation that the open source community is relatively unprepared for their usage patterns. Jimmy zigs where open source expects a zag. The good news is that the gap is a product design opportunity as much as it is a risk. Sandboxing, smarter dependency management, curated package sets, and proactive notification flows are established engineering and UX patterns which tool builders can adopt today. The vibe coder isn’t going to develop an interest in SBOMs. But they don’t have to, if the tools around them are thoughtful enough to care on their behalf.

If you’re securing the vibe-coding use case for non-developers, please reach out as we’re very much interested!

If you’re building something in this space, feel free to reach out to jpark@forgepointcap.com and kshih@forgepointcap.com.

This blog is also published on Margin of Safety, Jimmy and Kathryn’s Substack, as they research the practical sides of security + AI so you don’t have to.

[1] It’s worth noting that users who pinned their dependency versions would have been protected, since the malicious versions existed on PyPI for only a ~3-hour window. But version pinning is precisely the kind of deliberate dependency hygiene that Jimmy’s cohort of vibe coders is unlikely to know about, let alone practice. And as an ironic sidenote, the window might have been a lot longer had the attackers not shipped some suspiciously low-quality, possibly vibe-coded functionality that ultimately led to their detection.

[2] And let’s be honest; they often didn’t.

[3] Anthropic isn’t releasing particularly useful public estimates here, but consensus estimates place overall Claude adoption at ~20M monthly active users. Anthropic has said that ~55% of users engage with Claude Code for debugging tasks on a regular basis. We think a lower bound of 500k pure-vibe coders is probably a realistic guesstimate, and there could be significantly more.

[4] This could be true for multiple reasons: someone developing software within a sensitive enterprise context/network, someone who would be particularly challenged to clean up a malicious library, etc.