For users who’ve treated their $200/month Anthropic Claude or $250/month Google Gemini flat-rate accounts as an all-access pass to AI experimentation, the honeymoon may be over. Both companies are now banning accounts that authenticate OpenClaw—the viral agentic AI tool—through OAuth credentials, a method that lets users bypass token-based billing entirely.

The bans, which come without refunds or clear explanations, mark a sharp shift in how major AI providers view third-party integration. Unlike OpenAI, which has yet to enforce similar restrictions (possibly due to OpenClaw’s creator joining its ranks), Anthropic and Google are treating OAuth-driven OpenClaw usage as a violation of terms of service. For users, the fallout isn’t just inconvenient—it’s financially costly.

Why OAuth is the problem

OAuth, the authorization standard behind "Sign in with Google" buttons, is being repurposed by OpenClaw to tap into flat-rate accounts. The practice isn't illegal, but it directly undermines the pay-as-you-go API model that Anthropic and Google rely on. Users extract an OAuth token from Claude or Gemini, typically one issued to tools like Claude Code or Google's Antigravity, then feed it into OpenClaw to unlock its agentic capabilities without per-token billing.
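A rough sketch of the two credential paths described above. This is purely illustrative: the header names, token formats, and `build_headers` helper are assumptions for the sake of the example, not the actual Claude or Gemini wire protocol.

```python
def build_headers(credential: str, via_oauth: bool) -> dict:
    """Build request headers for a hypothetical chat-completion call.

    via_oauth=True mimics what OpenClaw reportedly does: reuse a
    subscription's OAuth bearer token, so usage lands on the flat-rate
    plan rather than a metered API bill.
    via_oauth=False is the sanctioned path, using a pay-as-you-go key
    issued from the provider's developer console.
    """
    if via_oauth:
        # Token lifted from a logged-in client (e.g. Claude Code)
        return {"Authorization": f"Bearer {credential}"}
    # Metered API key: every token in and out is billed
    return {"x-api-key": credential}

# Same request shape, very different billing consequences:
flat_rate = build_headers("oauth-token-from-claude-code", via_oauth=True)
metered = build_headers("sk-my-api-key", via_oauth=False)
```

From the provider's side, both requests look legitimate at the transport level, which is why the crackdown happens at the account level rather than the protocol level.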

The issue isn’t OAuth itself; it’s the scale. OpenClaw’s agentic architecture demands far more computational resources than a traditional chatbot, and a long session can burn through millions of tokens. Where ChatGPT might use 2,000 tokens for a conversation, OpenClaw can devour 30,000 or more for basic interactions. For flat-rate users, that’s free access to a tool designed for API-based billing.
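To put those figures in perspective, here is a back-of-envelope cost comparison using the token counts above. The per-million-token price is a placeholder assumption, not a real Anthropic or Google rate.

```python
# Hypothetical blended API rate in USD; real prices vary by model
# and by input vs. output tokens.
PRICE_PER_MILLION_TOKENS = 15.00

def session_cost(tokens: int) -> float:
    """Cost of a session at the assumed metered API rate."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

chat = session_cost(2_000)        # a typical chatbot exchange
agent = session_cost(30_000)      # a basic OpenClaw interaction
heavy = session_cost(5_000_000)   # a long multi-step agentic session
```

Even at this made-up rate, a single heavy agentic session would cost tens of dollars on the API, which is the bill a flat-rate subscriber sidesteps entirely.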

Google and Anthropic’s stance: ‘Not as intended’


Google DeepMind has framed the crackdown as a response to ‘malicious usage’ degrading service quality. In a statement, the team acknowledged that while some users may have been unaware of the restrictions, the surge in OAuth-driven traffic disrupted core services for legitimate customers. Anthropic hasn’t issued a public explanation, but the bans suggest a similar stance: flat-rate accounts aren’t meant to power third-party tools.

OpenClaw’s creator has hinted at removing Google Antigravity support in response, though OpenAI’s recent hiring of the developer may shield ChatGPT users from similar actions—for now. The inconsistency highlights a fragmented approach: where one provider tolerates (or even embraces) a tool, others see it as a threat to their business model.

What’s next for OpenClaw users?

  • API is the only safe path. Both Anthropic and Google explicitly endorse their API for third-party use, complete with token-based billing. Users who want to keep OpenClaw running should migrate to API keys—though bills may climb unexpectedly.
  • Banned? Create a new account. Google isn’t disabling adjacent services (Gmail, Drive, etc.), but the AI account bans themselves are permanent, with no appeals process. Some users report success with fresh accounts, though repeated violations could trigger broader restrictions.
  • OpenClaw’s future hinges on OpenAI. With its creator now at OpenAI, the tool’s trajectory depends on how ChatGPT’s policies evolve. If OpenAI adopts a permissive stance, others may follow—but for now, Claude and Gemini users are on their own.

The bans serve as a warning: in the AI arms race, flat-rate plans aren’t a blank check. For those who’ve grown accustomed to treating their $250/month Gemini or $200/month Claude subscriptions as a playground, the rules just changed—and the playground door may be closing.