They cannot actually do this as long as they keep Claude Code open source. It is always going to be trivial to replicate how it sends requests in a third-party tool.
They may, however, be obligated not to give customers access to their services at a discounted rate either; predatory pricing is, at least some of the time and in some jurisdictions, illegal.
Are you suggesting Anthropic has a “duty to deal” with anyone who is trying to build competitive products to Claude Code, beyond access to their priced API? I don’t think so. Especially not to a product that’s been breaking ToS.
Obviously Anthropic are within their rights to do this, but I don’t think their moat is as big as they think it is. I’ve cancelled my max subscription and have gone over to ChatGPT pro, which is now explicitly supporting this use case.
Is opencode that much better than Codex / Claude Code for CLI tooling that people are prepared to forsake[1] Sonnet 4.5/Opus 4.5 and switch to GPT 5.2-codex?
The moat is Sonnet/Opus, not Claude Code; it can never be a client-side app.
Cost arbitrage like this is short-lived, lasting only until the org changes pricing.
For example, Anthropic could release, say, an ultra plan at $500-$1000 with these restrictions removed/relaxed that reflects the true cost of the consumption. Or they could get the cost of inference down enough that even at $200 it is profitable for them, and then they will stop caring if the higher bracket does not sell well. Then $200 is what the market is ready to pay, and there will be a percentage of users who use it more than the rest, as is the case with any software.
Either way, the only money here, i.e. the $200 (or more), is going only to Anthropic.
[1] Perceived or real, there is a huge gulf in how Sonnet 4.5 is seen versus GPT 5.2-codex.
The combination of Claude Code and models could be a moat of its own; they are able to use RL to make their agent better - tool descriptions, reasoning patterns, etc.
Are they doing it? No idea, it sounds ridiculously expensive; but they did buy Bun, maybe to facilitate integrating around CC. Cowork, as an example, uses CC almost as an infrastructure layer, and the Claude Agent SDK is basically LiteLLM for your Max subscription - also built on/wrapping the CC app. So who knows, the juice may be worth the RL squeeze if CC is going to be foundational to some enterprise strategy.
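For what "LiteLLM for your Max subscription" looks like in practice, here's a minimal sketch using the Claude Agent SDK; the package name and the shape of query() are from memory of the SDK docs, so treat the exact fields as approximate.
  // Minimal sketch: driving the Claude Code agent loop programmatically.
  // Assumes `npm install @anthropic-ai/claude-agent-sdk` and an already
  // authenticated Claude Code / Max session on this machine.
  import { query } from "@anthropic-ai/claude-agent-sdk";

  async function main() {
    // query() streams messages from the same agent loop Claude Code uses.
    for await (const message of query({
      prompt: "Summarize the TODOs in this repo",
      options: { maxTurns: 3 }, // option names approximate
    })) {
      if (message.type === "result") {
        console.log(message.result);
      }
    }
  }

  main().catch(console.error);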
Also IMO OpenCode is not better, just different. I’m getting great results with CC, but if I want to use other models like GLM/Qwen (or the new Nvidia stuff) it’s my tool of choice. I am really surprised to see people cancelling their Max subscriptions; it looks performative and I suspect many are not being honest.
This is definitely Barbra Streisanding right now. I had never heard of OpenCode. But I sure have now! Will have to check it out. Doubt I’ll end up immediately canceling Claude Code Max, but we’ll see.
agreed. This is definitely free PR for OpenCode. I didn't try it myself until I heard the kerfuffle around Anthropic enforcing their ToS. It definitely has a much nicer UX than claude-code, so I might give the GPT subscription a shot sometime, given that it's officially supported w/ 3rd party harnesses, and gpt 5.2 doesn't appear to be that far behind Opus (based on what other people say).
i've been on claude code since before they even HAD subscriptions (api only), and since getting max from day 1 I haven't once assumed that access was allowed outside of CC. anyone who thinks otherwise is leaning into that cognitive dissonance
You can't control it to the level of individual LLM requests and orchestration of those. And that is very valuable, practically required, to build a tool like this. Otherwise, you just have a wrapper over another big program and can barely do anything interesting/useful to make it actually work better.
What can't you do exactly? You can send the Claude binary arbitrary user prompts—with arbitrary custom system prompts—and get text back. You can then put those text responses into whatever larger system you want.
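For example, a wrapper needs little more than this (a sketch only; the -p print mode and --output-format json flags are real Claude Code CLI flags, but verify the system-prompt flag name with claude --help):
  // Rough sketch: treat the Claude Code binary as a text-in/text-out function.
  import { execFile } from "node:child_process";
  import { promisify } from "node:util";

  const run = promisify(execFile);

  async function askClaude(prompt: string, systemPrompt?: string): Promise<string> {
    const args = ["-p", prompt, "--output-format", "json"];
    if (systemPrompt) {
      // Flag name assumed from the CLI docs; check `claude --help`.
      args.push("--append-system-prompt", systemPrompt);
    }
    const { stdout } = await run("claude", args);
    // The JSON output wraps the final answer; the field name may differ.
    return JSON.parse(stdout).result ?? stdout;
  }

  // Feed the response into whatever larger system you want.
  askClaude("List the public functions in src/", "Answer as terse bullet points")
    .then(console.log);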
wow. ACP is used within Zed, so I guess Zed is safe, since it drives Claude Code through ACP.
I wonder if Opencode could use ACP protocol as well. ACP seems to be a good abstraction, I should probably learn more about it. Any TLDR's on how it works?
According to Opus, ACP is designed specifically for IDE clients (with coding agent “servers”), and there’s some impedance mismatch here that would need to be resolved for one agent CLI to operate as a client. I haven’t validated this, though.
—-
1. ACP Servers Expect IDE-like Clients
The ACP server interface in Claude Code is designed for:
∙ Receiving file context from an IDE
∙ Sending back edits, diagnostics, suggestions
∙ Managing a workspace-scoped session
It’s not designed for another autonomous agent to connect and say “go solve this problem for me.”
2. No Delegation/Orchestration Semantics in ACP
ACP (at least the current spec) handles:
∙ Code completions
∙ Chat interactions scoped to a workspace
∙ Tool invocations
It doesn’t have primitives for:
∙ “Here’s a task, go figure it out autonomously”
∙ Spawning sub-agents
∙ Returning when a multi-step task completes
3. Session & Context Ownership
Both tools assume they own the agentic loop. If OpenCode connects to Claude Code via ACP, who’s driving? You’d have two agents both trying to:
∙ Decide what tool to call next
∙ Maintain conversation state
∙ Handle user approval flows
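For the TL;DR question above: ACP is JSON-RPC over the agent's stdin/stdout, with the editor as the client and the coding agent as the server. A rough sketch of the message shapes, with method and field names approximated from the public spec rather than verified:
  // Sketch of the ACP handshake as JSON-RPC over the agent process's stdio.
  // Method/param names are approximations of the public spec, not verified.
  const initialize = {
    jsonrpc: "2.0", id: 1, method: "initialize",
    params: { protocolVersion: 1, clientCapabilities: { fs: true } },
  };
  const newSession = {
    jsonrpc: "2.0", id: 2, method: "session/new",
    params: { cwd: "/path/to/workspace", mcpServers: [] },
  };
  const prompt = {
    jsonrpc: "2.0", id: 3, method: "session/prompt",
    params: {
      sessionId: "id-returned-by-session/new",
      prompt: [{ type: "text", text: "Fix the failing test" }],
    },
  };
  // The agent streams progress back as session/update notifications (plans,
  // tool calls, diffs) and can call back into the client for file access or
  // permission prompts before applying edits.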
It’ll be interesting to see how far they take this cat and mouse game. Will “model attestation” become a new mechanism for enforcing tight coupling between client and inference endpoint? It could get weird, with secret shibboleths inserted into model weights…
Switching models is too easy and the models are turning into commodities. They want to own your dev environment, for which they can ultimately charge more than for access to their model alone.
I think the focus on OpenCode is distorting the story. If any tool tried to use the CC API instead of the regular API they’d block it.
Claude Code as a product doesn’t use their pay per call API, but they’ve never sold the Claude Code endpoint as a cheaper way to access their API without paying for the normal API
When using their web UI with Firefox and uBlock Origin, it regularly freezes the tab while the answer is being written out. Someone at Anthropic decided to create a letter-by-letter typing animation with a GIF image and Sentry callbacks every five seconds, which ends up in an infinite loop.
I've seen reports about this bug affecting Firefox users since Q3 2025. They were reported over various channels.
Not a fan of them prioritizing the fight against opencode instead of fixing issues that affect paying users.
It also happens with extensions and the Firefox adblocker disabled. Might be connected to one of the Firefox anti-tracking features, but I was unable to figure it out. The profiler shows an infinite loop.
I've found several reports about this issue. Seems they don't care about Firefox.
While Anthropic can choose which tools use their API or subscription, I never fully understood what they gain from having the subscription explicitly work only with Claude Code. Is the issue that it disincentivizes the use of their API?
They gave Claude Code a discount to make it work as a product.
The API is priced for all general purpose usage.
They never sold the Claude Code endpoint as a cheaper general purpose API. The stories about “blocking OpenCode” are getting kind of out of hand because they’d block any use of the Claude Code endpoint that wasn’t coming from their Claude Code tool.
Owning the client gives them full control over which model to use for which query, prompt caching, rate limiting and lots more. So they can drive massive savings for the ~same output compared to just giving unrestricted access to the API.
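To make "which model for which query" concrete, here is a toy, hypothetical router of the kind a client that owns the loop can implement; this is not Claude Code's actual logic, and the task categories and model IDs are illustrative.
  // Hypothetical client-side model routing, purely illustrative.
  type Task = { kind: "title" | "search" | "edit" | "plan"; prompt: string };

  function pickModel(task: Task): string {
    switch (task.kind) {
      case "title":   // cheap bookkeeping: titles, summaries, quick lookups
      case "search":
        return "claude-haiku-4-5";
      case "edit":    // routine code edits
        return "claude-sonnet-4-5";
      case "plan":    // heavy multi-step reasoning
        return "claude-opus-4-5";
    }
  }

  console.log(pickModel({ kind: "title", prompt: "Name this session" }));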
The issue is that Claude Code is cheap because it uses the API's unused capacity. These kinds of circumventions hurt them both ways: one, they don't know how to estimate API demand, and two, other harnesses are more bursty in nature (e.g. parallel calls) compared to Claude Code, so it screws over other legit users. Claude Code very rarely makes parallel calls for context commands etc., but these ones do.
Re the whole unused capacity: that is the nature of inference on GPUs. In any cluster you can batch inputs (i.e. it takes about the same time for 1 query as for 100, since they can be parallelized), and now continuous batching[1] exists. With the API and the bursty nature of requests, clusters would sit at 40%-50% of peak API capacity. It makes sense to divert that headroom to subscriptions: it reduces API costs in the future and gives Anthropic a way to monetize unused capacity. But if everyone does it, then there is no unused capacity to manage and everyone loses.
[1]: https://huggingface.co/blog/continuous_batching
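A toy model of why that idle capacity exists and why filling the gaps helps; the numbers are entirely made up, just to illustrate utilization under bursty vs. steady arrivals.
  // Toy model of continuous batching: a fixed number of GPU "slots", where a
  // new request takes over a slot the moment a previous one finishes instead
  // of waiting for a whole batch to drain. Numbers are made up.
  type Req = { arrival: number; decodeSteps: number };

  function utilization(reqs: Req[], slots: number): number {
    const freeAt = new Array(slots).fill(0); // time each slot becomes free
    let makespan = 0;
    for (const r of reqs) {
      const i = freeAt.indexOf(Math.min(...freeAt)); // earliest-free slot
      const start = Math.max(r.arrival, freeAt[i]);
      freeAt[i] = start + r.decodeSteps;
      makespan = Math.max(makespan, freeAt[i]);
    }
    const work = reqs.reduce((s, r) => s + r.decodeSteps, 0);
    return work / (slots * makespan); // fraction of GPU-time actually used
  }

  // Bursty API traffic vs. the same work spread out (e.g. subscription use):
  const bursty = [0, 0, 0, 0, 100, 100, 100, 100].map(t => ({ arrival: t, decodeSteps: 20 }));
  const steady = [0, 10, 20, 30, 40, 50, 60, 70].map(t => ({ arrival: t, decodeSteps: 20 }));
  console.log(utilization(bursty, 4), utilization(steady, 4)); // ~0.33 vs ~0.44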
Have had Max for a while. Funny thing: opencode still sorta works with my CC Max subscription. That said, after a while opencode just hangs. My workflow involves saving state frequently: I cancel, open it back up, and continue; then it's performant for maybe 2-3 context windows, and repeat.
you can get around this by making an agent in opencode whose prompt doesn't mention opencode at all, e.g. "You're an agent that uses Claude Opus...", and it will just work.
While the subscription is definitely subsidized (technically cross-subsidized, because the subsidy is coming from users who pay but barely use it), Claude Code also does a ton of prompt caching that reduces LLM dependency. I have done many hours-long coding sessions and built entire websites using the latest Opus and the final tally came to like $4, whereas without caching it would have been $25-30.
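For reference, this is roughly what that caching looks like at the API level. A minimal sketch against the public Messages API: the cache_control field and the cache usage counters are real API features, but the model name and the context placeholder here are illustrative.
  // Sketch of Anthropic prompt caching: mark a large, stable prefix (system
  // prompt, repo context) as cacheable so repeated agent turns re-read it at
  // the cheaper cache-read rate instead of full input price.
  async function cachedTurn(userMessage: string) {
    const BIG_STABLE_PROJECT_CONTEXT = "...repo map, style guide, tool docs...";
    const res = await fetch("https://api.anthropic.com/v1/messages", {
      method: "POST",
      headers: {
        "content-type": "application/json",
        "x-api-key": process.env.ANTHROPIC_API_KEY!,
        "anthropic-version": "2023-06-01",
      },
      body: JSON.stringify({
        model: "claude-sonnet-4-5", // illustrative
        max_tokens: 1024,
        system: [{
          type: "text",
          text: BIG_STABLE_PROJECT_CONTEXT,
          cache_control: { type: "ephemeral" }, // cache everything up to here
        }],
        messages: [{ role: "user", content: userMessage }],
      }),
    });
    const data = await res.json();
    // usage.cache_creation_input_tokens / cache_read_input_tokens show the savings
    console.log(data.usage);
    return data;
  }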
Are you saying CC does caching that opencode does not? What does Anthropic care? They limit you based on tokens, so if other agents burn more then users will simply get less work done, not use more tokens, which they can't. I don't think Anthropic's objection is technical.
Cry me a river - I never stop hearing how developers think their time is so valuable that no amount of AI use could possibly not be worth it. Yet suddenly, paying for what you use is "too expensive".
I'm getting sick of costs being distorted. It's resulting in dysfunctional methodologies where people are spinning up a ridiculous number of agents in the background, burning tokens to grind out solutions where a modicum of oversight or direction from a human would result in 10x less compute. At the very least the costs should be realised by the people doing this.
Well, they are paying. Just not for the product Anthropic wants to sell. Really, at root this is a marketing failure. They really, really want to push Claude CLI as a loss leader, and are having to engage in this disaster of an anti-PR campaign to plug all the leaks from people sneaking around.
The root cause is and remains their pricing: the delta between their token billing and their flat fee is just screaming to be exploited by a gray market.
I believe LLM providers should ultimately be utilities from a consumer perspective, like water suppliers. I own the faucet, washer, bathtub, and can switch suppliers at will. I’ve been working on a FOSS client for them for nearly three years.
I hope it's clear that the following is purely a factual distinction, not an excuse or an attempt to empathize.
The difference between the other entities named and OpenCode is this:
OpenCode uses people’s Claude Code subscriptions. The other entities use the API.
Specifically, OpenCode reverse‑engineers Claude Code’s OAuth endpoints and API, then uses them. This is harmful from Anthropic's perspective because Claude Code is subsidized relative to the API.
Edit: I’m getting “You’re posting too fast” when replying to mr_mitm. For clarity, there is no separate API subscription. Anthropic wants you to use one of two funnels for coding with their LLMs:
1. The API (through any frontend), or
2. A subscription through an Anthropic‑owned frontend.
You're hitting an important point. I might go on a tangent here.
It's up to operating systems to offer a content consumption experience for end users which reverses the role of platforms back to their original, most basic offers. They all try to force you into their applications which are full of tracking, advertisements, upsells, and anti-consumer interface design decisions.
Ideally the operating system would untangle the content from these applications and allow the end user to consume the content in a way that they want. For example Youtube offers search, video and comments. The operating system should extract these three things and create a good UI around it, while discarding the rest. Playlists and viewing history can all be managed in the offline part of the application. Spotify offers music, search and lyrics but they want you to watch videos and use social media components in their very opinionated UIs, while actively fighting you to create local backup of your music library.
Software like adblockers, yt-dlp and streamlink are already solving parts of these issues by untangling content from providers for local consumption in a trusted environment. For me the fight by Anthropic against OpenCode fits into this picture.
These companies are acting hostile even towards paying customers, each of them trying to build their walled gardens.
It wasn't just hooking up a new faucet. It was hijacking an API key intended for Claude Code specifically. So in this metaphor it would be hooking up a secondary water pipe from the water company, intended only for sprinklers they provide, to your main water supply. The water company notices abnormal usage coming from the sprinkler water pipe and shuts it off, while leaving your primary water pipe alone.
Possibly a better comparison (though a bit dated now) would be AT&T (or whatever telephone monopoly one had/has in their locality) charging an additional fee to use a telephone that isn't sold/rented to them by AT&T.
Fwiw, your main point seems scattered across your post where sentences refer to supposed context established by other sentences. It's making it hard to understand your position.
Maybe try the style where you start off with your position in a self-contained sentence, and then write a paragraph elaborating on it.
It's exactly like water. Use their API, and you pay for as much water as you drink. But visit them in their pub, and you get a pretty big buffet with lots of water for a one-time price.
Please stop spreading this nonsense. Anthropic is not blocking Opencode. You can use all their models within Opencode using API. Anthropic simply let Dax and team use unlimited plans for the past year or so. I don’t even know if it was official. I find this a bit comical and immature. You want to use the models, just pay for it. Why are people trying to nickel and dime on tools that they use day in day out?
You can clearly run the provided gist. A request with “You are OpenCode” in the system prompt fails, but not if you replace the name with another tool name (e.g. “You are Cursor”, “You are Devin”). Pretty blatant difference in behavior based on a blacklisted value.
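The whole test really is just one string swapped between otherwise identical requests. A sketch of that shape: sendViaSubscriptionAuth here is a hypothetical stand-in that only simulates the behavior the gist reports, not the real OAuth/endpoint logic.
  // Hypothetical A/B harness: identical requests except for the tool name in
  // the system prompt. sendViaSubscriptionAuth is a placeholder that simulates
  // the reported behavior; the gist's actual endpoint call is not reproduced.
  async function sendViaSubscriptionAuth(systemPrompt: string): Promise<{ status: number }> {
    return { status: /opencode/i.test(systemPrompt) ? 403 : 200 }; // simulated
  }

  async function compare() {
    for (const name of ["OpenCode", "Cursor", "Devin"]) {
      const { status } = await sendViaSubscriptionAuth(`You are ${name}, an AI coding agent.`);
      console.log(`${name}: HTTP ${status}`);
    }
  }

  compare();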
I do not understand the stubbornness about wanting to use the auth part. Locally, just call Claude Code from your harness, or better, use the Claude Agent SDK; both have clear auth and are permitted according to Anthropic. But to say that they want to use this auth as a substitute for the API is a different issue altogether.
I get it though, Anthropic has to protect their investment in their work. They are in a position to do that, whereas most of us are not.
Viewed another way, the preferential pricing they're giving to Claude Code (and only Claude Code) is anticompetitive behavior that may be illegal.
They’re not obligated to give other companies access to their services at a discounted rate.
"The open source AI coding agent
Free models included or connect any model from any provider, including Claude, GPT, Gemini and more."
Limiting the subscription to Claude Code also perhaps helps preserve some moat around their product/service.
> it uses API's unused capacity
I see no waiting or scheduling on my usage - it runs at what appears to be full speed until I hit my 4-hour / 7-day limit, and then it stops.
Claude Code is cheap (via a subscription) because it is burning piles of investor cash, while making a bit back on API / pay-per-token users.
With continuous batching, you don't wait for the entire previous batch to finish. A request goes in as another finishes. Hence the wait time is negligible.
Anthropic blocks third-party use of Claude Code subscriptions
https://news.ycombinator.com/item?id=46549823
I don't like it either, but it is what it is.
If I gave free water refills when you used my brand XYZ water bottle, you shouldn't cry that you don't get free refills for your ABC-branded bottle.
It may be scummy, but it does make sense.
Edit: or should I say, the subscription is artificially cheap
Yeah, I noticed it. I use Claude, but I use it responsibly. I wonder how many "green" people run these instances in parallel. :D
That's why we are supposed to have legislation to ensure that utilities and common carriers can't behave that way.
It means that even though the cost depends on usage, you are billed at least a fixed minimum amount, regardless of how little water you actually use.