On a Saturday in December 2025, Anthropic quietly updated its acceptable use policy to prohibit third-party applications from using OAuth to authenticate as users of Claude.ai. The policy update went live without a developer announcement, without email notification, and without a grace period for existing integrations.
By Monday morning, tools with a combined GitHub star count exceeding 56,000 — Claude integrations for VS Code, browser extensions, productivity apps, and developer workflow tools — were broken. Their OAuth flows returned authentication errors. Their users received no explanation. The tools' authors had no warning this was coming.
This is an analysis of what happened, why it happened, and what it means for any team that builds on Anthropic's products.
The Technical Mechanism
Many third-party Claude integrations — particularly those that predated Anthropic's official API or that targeted Claude.ai features not yet available in the API — authenticated with OAuth flows against claude.ai rather than against the official developer API with an API key. This let the tools act as an authenticated Claude.ai user, unlocking capabilities like Projects, artifacts, and the full context window without waiting for those features to reach the official API.
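For contrast, this is a minimal sketch of the sanctioned path: authenticating against Anthropic's developer API with a per-account API key. The endpoint and headers follow Anthropic's public documentation; the model name and the helper function are illustrative, not taken from any tool mentioned above.

```python
# Sketch of the official API-key path, the alternative the banned
# OAuth tools were expected to migrate to. The model name below is
# illustrative and may be outdated.

API_URL = "https://api.anthropic.com/v1/messages"

def build_messages_request(api_key: str, prompt: str) -> dict:
    """Assemble a request for the official Messages API."""
    return {
        "url": API_URL,
        "headers": {
            "x-api-key": api_key,               # per-key attribution and rate limits
            "anthropic-version": "2023-06-01",  # pinned API version
            "content-type": "application/json",
        },
        "json": {
            "model": "claude-sonnet-4-5",       # illustrative model name
            "max_tokens": 256,
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

Sending it is one call with any HTTP client, e.g. `requests.post(**build_messages_request(key, "Hello"))`. The key difference from the banned approach: every request is attributable to a key, rate-limited, and billed — exactly the controls the claude.ai OAuth flows bypassed.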
From Anthropic's perspective, this approach created risks: it made usage attribution impossible, it bypassed rate limiting and usage controls, it created security exposure for user accounts, and it consumed capacity that was allocated for direct users. The policy change that banned OAuth authentication was a rational response to these risks.
From developers' perspective, the tools they had built and their users had come to depend on stopped working without warning. The rationality of Anthropic's decision was not visible to them on Monday morning when their GitHub issues started filling with bug reports.
The Documentation Timeline
This is the fact that matters most for how you architect your AI vendor dependencies: Anthropic's formal documentation of the OAuth restriction — the update to its published acceptable use policy with specific language about third-party OAuth flows — appeared six weeks after the enforcement began.
For six weeks, developers trying to understand why their Claude integrations were failing found no official explanation in Anthropic's documentation. The information existed only in Discord posts and forum threads where Anthropic staff explained the change informally. The official record — the policy document that serves as the authoritative source for what is and is not permitted — reflected the old state of affairs.
This six-week gap is not an anomaly. It is a structural characteristic of how policy decisions get made and documented at AI companies operating at speed. The people who decide to restrict a capability and the people who document that restriction are often in different organizational functions operating on different timelines. The decision happens fast because it needs to. The documentation follows at its own pace.
What This Means for Your Dependency Model
If you build on Anthropic — or any major AI provider — the Anthropic OAuth ban has a direct implication for how you should think about your integration. Your dependency is not just on the features the provider currently offers. It is on the provider's continued willingness to allow you to use those features in the way you currently use them.
That willingness is not expressed primarily in product announcements or release notes. It is expressed in Terms of Service, acceptable use policies, and API usage policies that change without the ceremony of a product launch. Anthropic did not announce the OAuth restriction as a product change. It made a policy change that happened to remove a capability that thousands of developers depended on.
This distinction between product changes and policy changes matters enormously for monitoring. If you only watch product announcements and model releases, you will miss policy changes entirely. Policy documents are the governing layer above product features, and they change on their own schedule.
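Because policy documents change without announcement, the only reliable watch on them is mechanical. A minimal sketch of such a check — fetch the policy text on a schedule, normalize it, and compare a hash against yesterday's. The function names and normalization choice are mine, not a description of any particular product's internals:

```python
import hashlib
from typing import Optional, Tuple

def policy_hash(text: str) -> str:
    # Collapse whitespace so cosmetic reflows of the page
    # don't look like policy changes
    normalized = " ".join(text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def detect_change(previous_hash: Optional[str], current_text: str) -> Tuple[bool, str]:
    """Compare today's fetched policy text against the stored hash."""
    new_hash = policy_hash(current_text)
    changed = previous_hash is not None and new_hash != previous_hash
    return changed, new_hash
```

A bare hash only tells you that something changed; a real monitor would also store the previous text so it can diff and show which section moved.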
The Cost Distribution
The cost of the Anthropic OAuth ban was distributed across three populations in very different ways.
For individual developers maintaining open-source Claude integrations as side projects, the cost was primarily reputational and emotional: their users were suddenly experiencing broken tools, they had no explanation to give, and they faced a choice between a significant re-architecture effort or abandoning the project. Many chose abandonment. Tools with thousands of users went unmaintained overnight.
For small teams that had built commercial products on top of Claude.ai OAuth flows, the cost was existential: their product had stopped working, their revenue was immediately at risk, and the path to recovery required either migrating to the official API (with its different feature set and cost structure) or building on a different provider. Both paths required weeks of engineering work.
For enterprises using third-party Claude integrations as part of their AI stack, the cost was a sudden capability gap and an IT incident. The integrations their teams depended on for daily workflows stopped functioning, and the procurement and compliance processes required to onboard replacement tools are not fast.
What Mardii Detected and When
Mardii's policy surveillance system hashes Anthropic's acceptable use policy daily. The hash change that corresponded to the OAuth restriction was detected on the day the policy update was published — six weeks after enforcement began. Before that date, the behavioral change was detectable through Mardii's API health monitoring: OAuth authentication endpoints started returning error patterns consistent with intentional restriction rather than technical failure.
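Telling an intentional restriction apart from an ordinary outage is the crux of that behavioral signal. A hedged sketch of one way to do it — the status-code heuristic and function name here are illustrative assumptions, not Mardii's actual classifier:

```python
def classify_auth_failures(probes):
    """Classify a window of failed OAuth probe results.

    probes: list of (http_status, error_code) tuples from periodic
    authentication probes. Heuristic (illustrative): a sustained,
    uniform 401/403 response suggests a deliberate restriction,
    while 5xx codes or timeouts (status 0 here) suggest an outage.
    """
    if not probes:
        return "no-data"
    statuses = [status for status, _ in probes]
    error_codes = {code for _, code in probes}
    # Every probe rejected with the same auth error: looks deliberate
    if all(s in (401, 403) for s in statuses) and len(error_codes) == 1:
        return "intentional-restriction"
    # Server errors or timeouts in the mix: looks like a failure
    if any(s >= 500 or s == 0 for s in statuses):
        return "technical-failure"
    return "inconclusive"
```

The design point is that a single failed request is uninformative; the signal lives in the consistency of failures over a window, which is why scheduled probing beats ad hoc checks.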
Mardii subscribers received a Breaking severity alert on the day of detection, with the specific policy section that changed, a plain-language summary of what the change meant for third-party integrations, and the enforcement date derived from community reports in the gap between enforcement and documentation.
The broader lesson is that policy surveillance and API behavioral monitoring together provide earlier signal than either alone. Mardii applies both to every monitored provider, every day.
The Pattern Repeats
The Anthropic OAuth ban is one instance of a pattern. OpenAI restricted its plugin policy in 2025 with similarly little ceremony. Google's Gemini API terms were updated to restrict certain commercial use cases in ways that affected teams that had built products on the assumption of different terms. Mistral updated its data retention policy in a way that created GDPR compliance implications for European customers — changes that appeared in the policy document but not in any developer communication.
You cannot prevent providers from changing their policies. You can be the first to know when they do. Start monitoring at mardii.com — free.