
Great work, but concurrency is lost.

With search-replace you could work on separate parts of a file independently with the LLM. Not to mention that with each edit all the lines below are shifted, so you now need to provide the LLM with the whole updated content.
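A toy illustration of the shifting problem (the numbers are made up, just to make it concrete):

```python
# Toy illustration: edits addressed by line number break once earlier
# edits shift the file, so a later edit lands on the wrong line.
lines = [f"line {i}" for i in range(100)]

# Edit 1: insert two new lines before index 10.
lines[10:10] = ["new a", "new b"]

# Edit 2 was planned against the ORIGINAL numbering, targeting index 50,
# but the content it meant to change now lives at index 52.
print(lines[50])  # "line 48" -- not the line the second edit intended
```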

Have you tested followup edits on the same files?


(not the author) It works fine most of the time. I've been using it alongside an active agent and haven't run into too many noticeable problems. The token savings alone are worth it.

Serializing writes is probably fine and the hashes should only change if you're updating the same line, right?

You probably don't want to use the line number, though, unless you need to disambiguate.

But your write tool implementation can take care of that.
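A minimal sketch of the hash-anchored edit idea being described (the names and hashing scheme here are hypothetical, not the actual tool's implementation):

```python
import hashlib

def line_hash(line: str) -> str:
    # Short content hash that addresses a line without relying on its number,
    # so edits elsewhere in the file don't invalidate the anchor.
    return hashlib.sha256(line.encode()).hexdigest()[:8]

def apply_edit(lines: list[str], target_hash: str, new_text: str) -> list[str]:
    # Replace the unique line whose hash matches; on ambiguity the write tool
    # could fall back to a line number to disambiguate.
    matches = [i for i, line in enumerate(lines) if line_hash(line) == target_hash]
    if len(matches) != 1:
        raise ValueError(f"hash {target_hash} matched {len(matches)} lines")
    out = lines.copy()
    out[matches[0]] = new_text
    return out
```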


It's live on OpenRouter now.

In my personal benchmark it's bad. So far the benchmark has been a really good indicator of instruction following and agentic behaviour in general.

To those who are curious, the benchmark is just the ability of the model to follow a custom tool-calling format. I ask it to do coding tasks using chat.md [1] + MCPs. And so far it's just not able to follow it at all.

[1] https://github.com/rusiaaman/chat.md
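For a flavor of what a custom tool-calling format involves, here's a hypothetical sketch (illustrative only, not chat.md's actual syntax): the model is instructed to emit tool calls as fenced JSON blocks in plain text, and the harness parses them itself instead of relying on the provider's native tool-call tokens.

```python
import json
import re

# Hypothetical custom format: the model wraps tool calls in ```tool_call
# fences as JSON; the harness extracts and executes them.
TOOL_CALL_RE = re.compile(r"```tool_call\n(.*?)\n```", re.DOTALL)

def extract_tool_calls(completion: str) -> list[dict]:
    return [json.loads(block) for block in TOOL_CALL_RE.findall(completion)]

completion = (
    "I'll read the file first.\n"
    '```tool_call\n{"name": "read_file", "args": {"path": "main.py"}}\n```'
)
print(extract_tool_calls(completion))
# [{'name': 'read_file', 'args': {'path': 'main.py'}}]
```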


I love the idea of chat.md.

I'm developing a personal text editor with vim keybindings and paused work because I couldn't think of a good interface that felt right. This could be it.

I think I'll update my editor to do something like this but with intelligent "collapsing" of extra text to reduce visual noise.


Cool! Please share your work if possible!

I couldn't decide on folding and reducing noise, so I'm stuck on that front. I believe there is some elegant solution that I'm missing; hope to see your take.


Custom tool calling formats are iffy in my experience. The models are all reinforcement learned to follow specific ones, so it’s always a battle and feels to me like using the tool wrong.

Have you had good results with the other frontier models?


Not the parent commenter, but in my testing, all recent Claudes (4.5 onward) and the Gemini 3 series have been pretty much flawless in custom tool call formats.

Thanks.

I’ve tested local models from Qwen, GLM, and Devstral families.


All Anthropic models. Gemini 2.5 Pro and above. Gemini 3 Flash is very good too.

GPT models can follow the tool format correctly but don't keep on going.

Grok-4+ are decent but with issues in longer chats.

Kimi 2.5 has issues with reverting to the tool format it was RL'd on.


Could also be the provider that is bad. Happens way too often on OpenRouter.

I had added z-ai to the allow list explicitly and verified that it's the one being used.

Be careful with OpenRouter. They routinely host quantized versions of models via their listed providers, and the models just suck because of that. Use the original providers only.

I specifically do not use the CN/SG-based original provider simply because I don't want my personal data traveling across the Pacific. I try to stay only on US providers. OpenRouter shows you the quantization of each provider, so you can choose a domestic one that's FP8 if you want.

Funny, living in Europe, I prefer using EU and Chinese hosts because I don't want my data going to the US.

Trust in US firms and the US state is completely gone.


Tangent note: this sounds like the same mistake as the EU's reliance on Russia.

Not really. China doesn't share a border with us, doesn't claim any EU territory, and didn't historically rule our lands the way the USSR did. In the context of spheres of influence and security interests, its strategic goals aren't directly at odds with the EU's core interests.

The EU is not a single country, and Germany and France don't border Russia either.

Considering China is OK with supplying Russia, I don't see how your second point has any standing either.


> The EU is not a single country, and Germany and France don't border Russia either.

But soon they could; that's the problem.

> Considering China is OK with supplying Russia, I don't see how your second point has any standing either.

Supply? China supplies Ukraine too; Ukraine's drone sector runs heavily on Chinese supply chains. And if China really wanted to supply Russia, the war would likely be over by now, with Russia having taken all of Ukraine.


Each repost is worth it.

This, along with John Ousterhout's talk [1] on deep interfaces, was transformational for me. And this is coming from a guy who codes in Python, so lots of transferable lessons.

[1] https://www.youtube.com/watch?v=bmSAYlu0NcY


> These are sending all files it can access

TBF, Cursor's code indexing works the same way: it has to send all workspace files to their servers.

Auto-completion systems need previous edits to suggest next edits, so no surprises there either.


Sonnet has the same behavior: it drops thinking blocks on a new user message. Curiously, in the latest Opus they have removed this behavior and all thinking tokens are preserved.


Displaying inferred types inline is a killer feature (inspired by the Rust language server?). It was a pleasant surprise!

It's fast too, as promised.

However, it doesn't work well with TypedDicts and that's a show-stopper for us. Hoping to see that support soon.


We should generally support TypedDicts. Can you go into more detail about what is not working for you?


```python
from anthropic.types import MessageParam

data: list[MessageParam] = [{"role": "user", "content": [{"type": "text", "text": ""}]}]
```

This, for example, type-checks in both mypy and pyright. (Also, autocompletion of TypedDict keys / literals, which Pylance provides, is missing.)


Thank you!

I reported this as https://github.com/astral-sh/ty/issues/1994

Support for auto-completing TypedDict keys is tracked here: https://github.com/astral-sh/ty/issues/86


To those who are not deterred and feel yolo mode is worth the risk, there are a few patterns that should perk your ears up.

- Cleanup or deletion tasks. Be ready to hit Ctrl-C at any time. This has led to disastrous nukes in two Reddit threads.

- Errors impacting the whole repo, especially those that are difficult to solve. In such cases, if it decides to reset and redo, it may remove sensitive paths as well. It removed my repo once because "it had multiple problems and was better to write it from scratch".

- Any weird behavior ("this doesn't seem right", "looks like the shell isn't working correctly") indicative of an application bug. It might employ dangerous workarounds.


It just fetched the HTML and replicated it. The use of a table is a giveaway.

Any LLM with a browser tool can do it (Kombai one-shots it too, for example), because it's just cheating.


haha wow - it also just straight up copied the .gif files byte for byte - same SHA sum


But that's cheating because it then has the source code containing the table and its styles.

I can confirm that this is what it does.

And if you ask it not to use tables, it cleverly uses divs with the same layout as the table instead.


I think the idea is to let Claude see iterations of the reproduction with Playwright, but still only allow access to screenshots of the original.


In RNNs and Transformers we obtain the probability distribution of the target variable directly and sample from it using methods like top-k or temperature sampling.

I don't see the equivalence to MCMC. It's not like we have a complex probability function that we are trying to sample from using a chain.

It's just logistic regression at each step.


Right, you're describing sampling a single token, which is equivalent to sampling one step of the Markov chain. When generating output you repeat this process and update your state sequentially, which is the definition of a Markov chain, since at each step the output is conditionally independent of the past given the embedding (which represents our current state).

Every response from an LLM is essentially the sampling of a Markov chain.
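A toy sketch of that loop in Python (the next_token_logits stand-in is made up, not a real model): each token is sampled from a distribution that depends only on the current state, and appending it produces the next state, i.e. one transition of the chain.

```python
import numpy as np

rng = np.random.default_rng(0)

def next_token_logits(state: tuple[int, ...]) -> np.ndarray:
    # Stand-in for the model: logits are a deterministic function of the
    # current state (the token context) alone, which makes the chain Markov.
    local = np.random.default_rng(abs(hash(state)) % (2**32))
    return local.normal(size=50)

def sample_step(state: tuple[int, ...], k: int = 10, temp: float = 0.8) -> int:
    logits = next_token_logits(state) / temp      # temperature scaling
    top = np.argsort(logits)[-k:]                 # top-k filtering
    p = np.exp(logits[top] - logits[top].max())   # softmax over the top-k
    p /= p.sum()
    return int(rng.choice(top, p=p))              # one chain transition

state: tuple[int, ...] = (1, 2, 3)                # prompt tokens
for _ in range(5):
    state = state + (sample_step(state),)         # new state = state + token
print(state)
```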

