Obviously Anthropic are within their rights to do this, but I don’t think their moat is as big as they think it is. I’ve cancelled my Max subscription and moved over to ChatGPT Pro, which now explicitly supports this use case.
Is opencode that much better than Codex / Claude Code for CLI tooling that people are prepared to forsake[1] Sonnet 4.5/Opus 4.5 and switch to GPT 5.2-codex?
The moat is Sonnet/Opus, not Claude Code; the moat can never be a client-side app.
Cost arbitrage like this is short-lived; it only lasts until the org changes pricing.
For example, Anthropic could release, say, an Ultra plan at $500-$1,000 with these restrictions removed/relaxed, reflecting the true cost of the consumption. Or they could get the cost of inference down enough that even at $200 it is profitable for them, and stop caring if the higher bracket doesn't sell well. Then $200 is what the market is ready to pay, and there will be a percentage of users who use it more than the rest, as is the case with any software.
Either way, the money here, i.e. the $200 (or more), is only going to Anthropic.
[1] Perceived or real, there is a huge gulf in how Sonnet 4.5 is seen versus GPT 5.2-codex.
The combination of Claude Code and models could be a moat of its own; they are able to use RL to make their agent better - tool descriptions, reasoning patterns, etc.
Are they doing it? No idea, it sounds ridiculously expensive; but they did buy Bun, maybe to facilitate integrating around CC. Cowork, as an example, uses CC almost as an infrastructure layer, and the Claude Agent SDK is basically LiteLLM for your Max subscription - also built on/wrapping the CC app. So who knows, the juice may be worth the RL squeeze if CC is going to be foundational to some enterprise strategy.
Also IMO OpenCode is not better, just different. I’m getting great results with CC, but if I want to use other models like GLM/Qwen (or the new Nvidia stuff) it’s my tool of choice. I am really surprised to see people cancelling their Max subscriptions; it looks performative and I suspect many are not being honest.
Why would they not be able to use RL to learn even if it's OpenCode instead of Claude Code?
The tool calls, reasoning, etc. are still sent, tracked, and used by Anthropic; the model cannot function well without that kind of detail.
OpenCode also gets more data to train their own model with. However, at this point only a few companies can attempt foundational model training runs, so I don't think the likes of Anthropic are worried about a small player also getting their user data.
---
> it looks performative and I suspect many are not being honest.
Quite possible, if they were leveraging the cost arbitrage, i.e. the fact that the actual per-token cost was cheaper because of this loophole. Now their cost is higher, and they perhaps don't need/want/value the quality for the price paid, so they will go to Kimi K2 / Grok Code / GLM Air for better pricing. Basically, if all you value is cost per token, this change is reason enough to switch.
These are the kind of users Anthropic perhaps doesn't want. Somewhat akin to Apple segmenting the market and not chasing the budget end of it.
I’ve used both Claude and Codex extensively, and I already preferred Codex the model. I didn’t like the harness, but recently pi got good enough to be my daily driver, and I’ve since found that it’s much better than either CC or Codex CLI. It’s OSS, very simple and hackable, and the extension system is really nice. I wouldn’t want to go back to Claude Code even if I were convinced the model were much better - given that I already preferred the alternative it’s a no-brainer. OpenAI have officially allowed the use of pi with their sub, so at least in the short term the risk of a rug pull seems minimal.
I hope the upcoming DeepSeek coding model puts a dent in Anthropic’s armor.
Claude 4.5 is by far the best/fastest coding model, but the company is just too slimy and burning enough $$$ to guarantee enshittification in the near future.
Honestly, I'm a big Claude Code fan, even despite how bad its CLI application is, because it was so much better than other models. Anthropic's move here pretty much signals to me that the model isn't much better than other models, and that other models are due for a second chance.
If their model were truly ahead of the game, they wouldn't lock down the subsidized API in the same week they ask for 5-year retention on my prompts and permission to use them for training. Instead, they would be focusing on delivering the model more cheaply and broadly, regardless of which client I use to access it.
Should be, yes - ACP is basically just a different way of invoking the agent, so you're still using Claude Code. It's alternative clients like OpenCode, the CharmBracelet one and pi which will be affected - they basically reimplement the agent part and just call the API directly.
This will piss a lot of people off, and seems like a strange move. I get that this was always a hack and against the ToS. But I've been paying Anthropic money every month to do exactly what I would have done with Claude Code, but in another harness that I like better. All they've achieved here is that I am no longer giving them money. Their per-token pricing is really expensive compared to OpenAI, and I like the results from the OpenAI models better too, they're just very slow.
Here's a good benchmark from the brokk team showing the performance per dollar: GPT-5.1 is around half the price of Opus 4.5 for the same performance; it just takes twice as long.
So as of today, my money is going to OpenAI instead of Anthropic. They probably don't care though, I suspect that not many users are sufficiently keen on alternative harnesses to make a difference to their finances. But by the same token (ha ha), why enforce this? I don't understand why it's so important to them that I'm using Claude Code instead of something else.
Presumably Claude Code is a loss leader to try to lock you into their ecosystem, or at least to get you to exclusively associate “AI” with “Claude”. So if it’s not achieving those goals, they’d prefer you use OpenAI instead.
That's my understanding and that's what I see happening at some places.
People get a CC sub, invest in the whole tooling around CC (skills and whatnot), and once they're a few weeks/months in, they'll need a lot of convincing to even try something else.
And given how often CC itself changes and they need to keep up with it, that's even worse.
It's not just not wanting to get out of your comfort zone; it's also just trying to keep up with your current tools.
Now if you also have to try a new tool every other day, the claimed 10x productivity improvements won't be enough to make up for the few actual working hours you'll have left in a week.
I think most if not all of my CC customizations (skills, MCP config, CLAUDE.md) are quite easily portable to another agent. They are just text files. I may need to adjust one or two Claude specific things like thinking level instruction verbiage, but otherwise I don't see that as very sticky.
I'm curious, and in the spirit of a true MVP - does this need type, assignee, links or the parent relationship? Do you see the agent using them in practice? It seems like a minimal implementation could do without all of those, and I wonder if they're useful to the agent, or are just documentation for humans.
Since these seem more lightweight/ephemeral, it seems like it would be useful to search upwards for the nearest enclosing .tickets directory, so that subsystems could have separate issues.
Great call out. The agents will set those on `create` calls but don't typically reference them afterwards.
I have a fairly robust orchestration layer built on top of this tool that relies heavily on those fields though. But without that, they are a bit noisy.
Mixed feelings on upward search. One of my pain points with beads was that agents would sometimes create a bead outside of the correct directory and get dumped into a global `~/.beads/default.db` and make a mess. They've done that a couple times with ticket but run `tk ready` afterwards, see the new ticket is the only one, realize their mistake, and then relocate the ticket into the correct location. Still thinking on that one.
This is not the largest in the world but it is enormous, and it is also amazing for having been built by a 15-year-old to answer the question: "When I was 14 I asked my piano teacher how long a bass string would need to be if it had no copper on it at all".
I love this follow-up quote: "I think because I was so young I absolutely knew it was totally possible to do, I was fully determined and without consulting any professionals I had no barrier stopping me."
There's also something quintessentially New Zealand about the whole story: making it in a mate's garage, then moving the project to a farm tractor shed when it got too big, the inaugural concert that looks like it's still in the same shed, the photo of the tractor moving it for the outdoor concert...
Well done, when I was growing up we would always have some time in the evening when we would read a book out loud. When we were younger my parents would read and as we got older we would read sometimes too. I tried the same with my daughter but she stopped wanting to when she was around 10, but she’s in a better space now two years on so I’m going to try to resurrect the custom. It’s a really lovely thing to do as a family, but as the article suggests is quite strange these days and it can be difficult and require discipline to make the time.
I strongly encourage you to do that. The selection of the material is key: you have to find something that she is going to find fascinating but could not read on her own due to missing vocabulary / context / idiom-and-allusion cultural awareness. Then you get to try to fill that in with asides as you go along, if she'll tolerate it. Good luck!
Toad looks really nice, I will definitely try it out. I have some ACP questions if you don't mind.
First, from my reading of the ACP doc, one thing that seems pretty janky is if the ACP client wants to expose a tool to the agent, e.g. if Toad wanted to add the ability for the agent to display pretty diffs. In the doc they recommend stdio to the ACP server, then stdio to an MCP server, and then some out of band network request back to the ACP client. Have you thought about this, or found a better solution working on Toad?
Similarly, it would be useful to be able to expose a tool which runs a subagent using ACP using a different agent, e.g. if I'm using Claude for coding but I'd like to invoke codex for code review. Have you thought about doing anything like this? Is it feasible over the protocol?
I don’t follow your first question. Toad already displays pretty diffs. MCP works in the same way as the native CLI.
One of the advantages of Toad is that it is vendor agnostic. In the future Toad will be able to run sub agents, and allocate any agent to any job. Still to figure out the UX for that.
In my first question, I'm referring to exposing functionality from the ACP client to the agent. Imagine an IDE ACP client which wants to expose language refactoring to the agent, for example - I can't think of a better example for something more like Toad. As far as I know the protocol doesn't expose a way to inject tools into the agent from the ACP client.
The ACP protocol supports MCP. That would be how the client provides additional functionality for the agent. There's no UI in Toad for that yet, but there will be in a future update.
I wonder if, for instance, optimizing for speed may produce code that is faster but harder to understand and extend.
This is generally true for code optimised by humans, at least for the sort of mechanical low level optimisations that LLMs are likely to be good at, as opposed to more conceptual optimisations like using better algorithms. So I suspect the same will be true for LLM-optimised code too.
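A toy illustration of the trade-off (my own example, not from either comment): both functions below are equivalent for positive integers, but the second is the kind of mechanical bit-trick speed-up an optimiser, human or LLM, might produce, and its intent is opaque without the comment:

```python
def is_power_of_two_clear(n: int) -> bool:
    """Readable version: a power of two has exactly one set bit."""
    return n > 0 and bin(n).count("1") == 1


def is_power_of_two_fast(n: int) -> bool:
    """Micro-optimised version: for a power of two, clearing the
    lowest set bit (n & (n - 1)) leaves zero. Faster, but the
    reader has to reconstruct why it works."""
    return n > 0 and (n & (n - 1)) == 0
```

Scaled up across a codebase, many such substitutions are exactly what "faster but harder to understand and extend" looks like.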