
Pretty wild quote from early in the article:

> "Minions are Stripe’s homegrown coding agents. They’re fully unattended and built to one-shot tasks. Over a thousand pull requests merged each week at Stripe are completely minion-produced, and while they’re human-reviewed, they contain no human-written code."

Fully unattended code generation with human review makes sense, but it also seems potentially soulless. I suppose it depends on what they're one-shotting - if it's bug fixes, great. But if it's more creative coding, that might be a bit of a downer for the humans who are relegated to reviewing. Curious what others think.


Interesting! Sort of a tangential question to the product itself, but why did you choose to build primarily in Rust? I saw "built with Rust + simd-json + ratatui for ultra-fast performance" but am not actually very familiar with how these technology choices specifically enable ultra-fast performance. Would love to learn more!

Good question! Honestly, the README is a bit misleading - ratatui is just for the terminal UI, not performance. Let me clarify.

The actual speed comes from:

1. simd-json: This is the big one. It uses CPU SIMD instructions to parse multiple JSON bytes in parallel at the hardware level. We're talking ~3 GiB/s vs ~300 MB/s with standard parsers.

2. rayon: Dead simple parallel processing. Instead of parsing 2,000 files one by one, it spreads them across all CPU cores (rough sketch below, after the list).

3. Rust itself: No GC means no random pauses when you're crunching through gigabytes of data. The original Node.js version would just... freeze sometimes.
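
Roughly, the hot path looks like this (a simplified sketch, not the tool's actual code - the function name, the paths list, and the "array of records" JSON shape are all just for illustration):

    // Sketch: parse a batch of JSON files in parallel.
    // Cargo.toml: rayon = "1", simd-json = "0.13"
    use rayon::prelude::*;

    // Hypothetical helper, not from the repo: count top-level records
    // across many JSON files.
    fn count_records(paths: &[std::path::PathBuf]) -> usize {
        paths
            .par_iter() // rayon fans the files out across all CPU cores
            .map(|path| {
                // simd-json parses the buffer in place, so it must be mutable
                let mut bytes = std::fs::read(path).expect("read failed");
                match simd_json::to_borrowed_value(&mut bytes) {
                    Ok(simd_json::BorrowedValue::Array(items)) => items.len(),
                    _ => 0,
                }
            })
            .sum()
    }

The rayon fan-out alone gets you near-linear scaling with core count, and simd-json takes care of the per-file throughput.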

The 40s → 0.04s improvement is basically "what if we actually used the hardware properly?" - SIMD for parsing, all cores for parallelism, no GC getting in the way. (I should probably fix that README line - thanks for pointing it out!)


Congrats - seems like a wild launch! I (human) haven't been able to actually look at any of the topic pages; they're all "loading..." indefinitely. Is the site just slammed or are there outages? Would love to be able to take a look!

Looks like an outage

I think you're right - I'm guessing there were some outages with scaling and the surge of new human and AI users. Eventually it worked!

Thank you.


Getting a 404 page not found for this project - how can I try it?


Sorry about the broken link - here's the original: https://github.com/vrn21/bouvet/


Thanks!

This is fascinating, thanks for sharing! I also appreciated the "when would you need this" section at the end.

> "When Would You Need This? - Client hands you a Figma Make prototype but not the design file - You want to audit AI-generated code before deployment - You need to migrate away from Figma Make to a different stack - You want to extract design tokens for your design system - Pure curiosity about how Figma structures its data"


Thanks!


> "For us, abandoning low-code to reclaim ownership of our internal tooling was a simple build vs buy decision with meaningful cost savings and velocity gains. It also feels like a massive upgrade in developer experience and end-user quality of life. It’s been about 6 months since we made this switch, and so far we haven’t looked back."

Fascinating, but not surprising given some of the recent AI-driven changes in software development.


Additional docs: https://nvidia.github.io/earth2studio/

> "Open-source deep-learning framework for exploring, building and deploying AI weather/climate workflows."


The section on IDEs/agent swarms/fallibility resonated a lot with me; I haven't gone quite as far as Karpathy in terms of power usage of Claude Code, but his analysis of how the mistakes have shifted (and of reality vs. hype) seems spot on in my (caveat: more limited) experience.

> "IDEs/agent swarms/fallability. Both the "no need for IDE anymore" hype and the "agent swarm" hype is imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might do. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE . md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagine going back to manual coding. TLDR everyone has their developing flow, my current is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits."

