
Doesn't make it excusable. I get it's hard to uphold principles when the stomach is empty. But it's clear the person in the piece wasn't thinking about much else, though he was also clearly not in the streets and starving.

Claude -> Clawd -> Moltbot -> Openclaw

Only a few things have claws. Lobsters being one of them.


Fair enough. Lobsters are cool.

I'd be interested in what kind of eSports game is conducive to VR spectating.

I tried Dota spectating before, and rigged up a mod for Minecraft vlogging/spectating, and concluded it was neither quite like being at a stadium nor as interesting as watching it on Twitch.


Do you mean VR "in the cockpit" or in the stadium? Flight simming has a robust VR community. I assume ultra technical car racing sims like iRacing are fun to spectate in VR. Geoguessr seems like a natural fit for VR as well, as long as you can avoid neck injuries from craning your head around.

I am convinced that there is an absurd amount of unrealized potential for spectating in eSports. But everyone seems to just deliver an experience that is more-or-less "like playing the game yourself, but worse, and with forced-hype commentary" rather than an actually engaging spectator product.

I write about reactivity, local-first software, visual programming, startups, and a smidge about game design.

https://interjectedfuture.com


I have a hunch we'll eventually swing back when we find the limits of vibe coding--in that LLMs also can only hold so much complexity in their heads, even if it's an order of magnitude (or more) greater than ours. If we make it understandable for humans then it'll definitely be trivial for LLMs, which frees them up to do other things. I mean, they don't have infinite layers or units to capture concepts. So the more symmetrical, consistent, and fractal (composable) you can make your code, the easier time an LLM will have with it to solve problems.


An LLM's context window limit already hits you in the nose when you have a big codebase and ask it questions that make it read a lot of code. 200k is easy to hit sometimes, especially when you only truly get to use 120k of it.


LLMs have no heads.

No one has, to my knowledge, demonstrated a machine learning program with any understanding or complexity of behaviour exceeding that of a human.

LLMs don't have understanding.

Frees up who, the LLM or the human? Same question for "they".

What does symmetrical, fractal code look like in this context? How does this property assist the LLM's parser?


Of course they have no literal heads. Please use a more gracious interpretation when reading.


There's that "they" again.

If you're reading past the first sentence this time -- it is obvious, yes. So why use such language to describe the software? Your deliberate choice to use misleading language is not only obviously incorrect, but harmful.


The solution offered is pretty weak. I don't think it addresses why the internet took the shape that it did. Publishing without centralized services is too much work for people. And even if you publish, it's not the whole solution. People want distribution with their publication. Centralized services offer ease of publication and ease of distribution. So unless the decentralized internet can offer a better solution to both, this story will play out again and again.


> For instance, I know Project A -- these are the concerns of Project A. I know Project B -- these are the concerns of Project B. I have the insight to design these projects so they compose, so I don't have to keep track of a hundred parallel issues in a mono Project C. On each of those projects, run a single agent -- with review gates for 2-3 independent agents (fresh context, different models! Codex and Gemini). Use a loop, let the agents go back and forth.

Can you talk more about the structure of your workflow and how you evolved it to be that?


I've tried most of the agentic "let it rip" tools. I quickly realized that GPT 5~ was significantly better at reasoning and more exhaustive than Claude Code (Opus, RL-finetuned for Claude Code).

"What if Opus wrote the code, and GPT 5~ reviewed it?" I started evaluating this question, and started to get higher quality results and better control of complexity.

I could also trust this process to a greater degree than my previous process of trying to drive Opus, looking at the code myself, trying to drive Opus again, and so on. Codex was catching bugs I would not have caught in the same amount of time, including bugs in hard math -- so I started having a great degree of trust in its reasoning capabilities.

I've codified this workflow into a plugin which I've started developing recently: https://github.com/evil-mind-evil-sword/idle

It's a Claude Code plugin -- it combines a "don't let Claude stop until the condition is met" Stop hook with a few CLI tools to induce (what the article calls) review gates: Claude will work indefinitely until the reviewer is satisfied.

In this case, the reviewer is a fresh Opus subagent which can invoke and discuss with Codex and Gemini.
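To make the mechanism concrete, here's a minimal sketch of what such a Stop hook can look like, assuming Claude Code's hook protocol of JSON on stdin and a {"decision": "block"} response to keep the session going; reviewer_verdict() is a hypothetical stand-in for whatever invokes the review subagent and the Codex/Gemini CLIs:

    #!/usr/bin/env python3
    """Sketch of a Stop hook that keeps Claude working until a reviewer signs off.

    Assumes Claude Code's hook protocol: the hook reads JSON from stdin and can
    print {"decision": "block", "reason": ...} to prevent Claude from stopping.
    reviewer_verdict() is a hypothetical stand-in for the review subagent.
    """
    import json
    import sys

    def reviewer_verdict() -> tuple[bool, str]:
        # Hypothetical: run the external reviewer and return (approved, feedback).
        return False, "Reviewer found unresolved issues; keep iterating."

    def main() -> None:
        payload = json.load(sys.stdin)  # hook input from Claude Code

        # If we're already continuing because of a stop hook, allow the stop
        # rather than looping forever.
        if payload.get("stop_hook_active"):
            sys.exit(0)

        approved, feedback = reviewer_verdict()
        if approved:
            sys.exit(0)  # reviewer is satisfied; let Claude stop

        # Block the stop and feed the reviewer's notes back as the next instruction.
        print(json.dumps({"decision": "block", "reason": feedback}))
        sys.exit(0)

    if __name__ == "__main__":
        main()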

One perspective I have which relates to this article is that the thing one wants to optimize for is minimizing the error per unit of work. If you have a dynamic programming style orchestration pattern for agents, you want the thing that solves the small unit of work (a task) to have as low error as possible, or else I suspect the error compounds quickly with these stochastic systems.
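A rough back-of-the-envelope way to see why that matters (numbers purely illustrative, not measurements): if each small task succeeds independently with probability p, a chain of n tasks succeeds with roughly p^n, so even modest per-task error rates collapse quickly over a long pipeline.

    # Illustrative only: how per-task error compounds across a chained pipeline.
    for p in (0.99, 0.95, 0.90):      # per-task success probability
        for n in (10, 50, 100):       # number of chained tasks
            print(f"p={p:.2f}, n={n:3d} -> chance the whole chain succeeds ~ {p**n:.3f}")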

I'm trying this stuff for fairly advanced work (in a PhD), so I'm dogfooding ideas (like the ones presented in this article) in complex settings. I think there is still a lot of room to learn here.


I'm sure we're just working with the same tools and thinking through the same ideas. Just curious if you've seen my newsletter/channel @enterprisevibecode https://www.enterprisevibecode.com/p/let-it-rip

It's cool to see others thinking the same thing!


This sounds like a missing piece of software in the OSS world. If you have the inclination, you should write it.



Content marketing and lead generation are getting sneakier.


Ah OpenSCAD. I wrote a tutorial a long time ago about the 10 things you need to know to get started.

https://cubehero.com/2013/11/19/know-only-10-things-to-be-da...

