
The hidden assumption here is that "learning programming" means replicating the author’s path: deep curiosity, lots of time, comfort asking humans, decent reading stamina. For people who already have those traits, yeah, you absolutely don’t need LLMs. But that’s a bit like a strong reader in 1995 saying "you don’t need Google to learn anything, the library is enough" - technically true, but it misses what changes when friction drops.

What LLMs do is collapse the activation energy. They don’t replace the hard work, they make it more likely you’ll start and keep going long enough for the hard work to kick in. The first 20 confusing hours are where most people bounce: you can’t even formulate a useful question for a human, you don’t know the right terms, and you feel dumb. A tool that will patiently respond to "uhh, why is this red squiggly under my thing" at 1am, 200 times in a row, is not a shortcut to mastery, it’s scaffolding to reach the point where genuine learning is even possible.

The "you won’t retain it if an LLM explains it" argument is about how people use the tool, not what the tool is. You also don’t retain it if you copy-paste Stack Overflow, or skim blog posts until something compiles. People have been doing that long before GPT. The deep understanding still comes from struggle, debugging, building mental models. An LLM can either be a summarization crutch or a Socratic tutor that keeps pushing you one step past where you are, depending on how you interact with it.

And "just talk to people" is good advice if you’re already inside the social graph of programmers, speak the language, and aren’t terrified of looking stupid. But the "nothing is sacred, everyone is eager to help" culture is unevenly distributed. For someone in the wrong geography, wrong time zone, wrong background, with no colleagues or meetups, LLMs are often the first non-judgmental contact with the field. Maybe after a few months of that, they’ll finally feel confident enough to show up in a Discord, or ask a maintainer a question.

There’s no royal road, agreed. But historically we’ve underestimated how much of the "road" was actually just gate friction: social anxiety, jargon, bad docs, hostile forums. LLMs don’t magically install kung-fu in your brain, but they do quietly remove a lot of that friction. For some people, that’s the difference between "never starts" and "actually learns the hard way."


"LLM as Socratic tutor" isn't quite right, because the LLM can't be trusted. But I have had great results with "LLM as debating partner". Basically, I try to explain the thing I'm learning and have the LLM critique me. Then I critique the LLM, because it usually says something that doesn't quite make sense (or I ask it cite its source and it recants its statement). A few rounds of this is (I think) really helpful for cementing my understanding.


Another assumption is that knowing how to use the tool doesn't require learning (and discipline). I tend to think that easily falling into using the crutch of AI is a significant issue itself, and that can be especially significant for people with ADHD.


Nobody needed Google in 1995 because Google didn't exist until 1998.


> The first 20 confusing hours are where most people bounce: you can’t even formulate a useful question for a human, you don’t know the right terms, and you feel dumb.

> argument is about how people use the tool, not what the tool is.

> The deep understanding still comes from struggle, debugging, building mental models. An LLM can either be a summarization crutch or a Socratic tutor that keeps pushing you one step past where you are, depending on how you interact with it.

> But historically we’ve underestimated how much of the "road" was actually just gate friction: social anxiety, jargon, bad docs, hostile forums.

Very well said.


Interesting point about the chains of YC startups. It really makes you think about how interconnected the startup ecosystem is. I wonder if there's an unspoken pressure to “keep it in the family,” so to speak, where founders might feel inclined to hire from their previous startups, or even lean on those networks when starting something new.

I've noticed trends where certain skills or experiences seem to bubble up in waves – like when a specific tech stack becomes popular and then a bunch of startups pop up around it. It’s almost like there’s a breeding ground effect happening.

And what about the concept of mentorship? Do you think these 'family trees' could lead to more structured mentorship, where founders from successful startups actively guide the next generation? Could be an interesting angle to explore!

I’m curious about edge cases too. Like, what happens when a founder breaks away from the traditional path, either founding a startup without that chain or maybe even pivoting away from the typical YC model? It makes these genealogies feel both fascinating and a touch limiting. I love seeing these connections getting mapped out, but part of me wonders if we might miss some innovative outliers by focusing too much on these chains.


YC founder here: there's not really any pressure like this. Basically I think it's a proximity thing and consistent exposure to a specific section of the startup world that makes this happen.

Basically, all those people would go to all the same networking events sponsored by YC. There's not pressure so much as these people all having frequent, paid-for opportunities to "hang out" and talk about tech stuff together. It's possible you could define this as "pressure", but I think it's more about who you hang out with than some top-down implicit force.


Interesting point about the fluid dynamics! I’ve always been fascinated by how we can use math and physics to create visuals that feel so alive. A couple years back, I tried to replicate smoke effects in a game and ended up getting lost in fluid simulations—it’s wild how a few equations can lead to such organic results.

But I wonder if there's a way to combine both approaches? Like using smooth gradients for the base texture while applying some small-scale turbulent dynamics on top. It could add a nice touch of realism without going full-on simulation, which can get heavy on performance.

Also, have you found any tools or libraries that make working with these simulations more accessible? Sometimes it feels like there’s a barrier to entry with the math, but once you get it, the creative possibilities are endless!


Vector textures are how you fake what you’re talking about. Just render the FFT/fluid output to a 3D texture where the RGBf is the normalized vector. Use this texture when you render the smoke billboard to add those vortices in 3D space by multiplying the uv by the vector in the vertex shader and shading like you do with the noise in the pixel shader.
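
If it helps to see the lookup spelled out, here’s a CPU-side sketch of that per-vertex step (Python/numpy; the scale factor, nearest-neighbour sampling, and names are made-up simplifications of what the real vertex shader would do):

    import numpy as np

    def sample_vector_tex(vel_tex, p):
        # vel_tex: (D, H, W, 3) array of normalized flow vectors in [-1, 1]
        # p: position in texture space, each component in [0, 1]
        d, h, w, _ = vel_tex.shape
        # nearest-neighbour lookup; a real shader would filter trilinearly
        k = min(int(p[2] * d), d - 1)
        j = min(int(p[1] * h), h - 1)
        i = min(int(p[0] * w), w - 1)
        return vel_tex[k, j, i]

    def perturb_uv(uv, pos_tex_space, vel_tex, strength=0.05):
        # per billboard vertex: look up the local flow vector and warp the noise UVs
        # with it, so the pixel shader's noise gets pulled into vortices
        uv = np.asarray(uv, dtype=float)
        v = sample_vector_tex(vel_tex, pos_tex_space)
        return uv * (1.0 + strength * v[:2])  # scale by the flow; an additive offset also works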

Ooooo… we have smoke like it’s a 1930s billiards bar.

Here’s a few examples that use some fluid dynamics to make “smoke”

https://play.huwroberts.dev/advection/

https://www.shadertoy.com/view/DsKyWm

https://ghostinthecode.net/2016/08/17/fire.html

For fire, the important part is the flame, not the spark. The methods are the same as above. Everything eventually lands on a 2D texture where you blur or blend it with the scene.

For smoke ribbons this works really well in screen space. For big plumes like mushroom clouds or physical clouds, not so much. For those, volumetric SDFs are the way to go, ray marching through the volume with some gradient of grey.
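
A minimal sketch of that ray march for a single ray (Python; the density callback, step count, and shading ramp are made-up placeholders, not any particular engine’s code):

    import numpy as np

    def march_plume(origin, direction, density_at, steps=128, step_size=0.05):
        # front-to-back compositing of a grey gradient through a density field
        # density_at(p) -> float in [0, 1]; in practice this evaluates your SDF/noise volume
        color = 0.0          # accumulated greyscale
        transmittance = 1.0  # how much light still gets through
        p = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        for i in range(steps):
            rho = density_at(p)
            if rho > 0.0:
                alpha = 1.0 - np.exp(-rho * step_size)   # absorption over this step
                shade = 0.3 + 0.7 * (i / steps)          # the "gradient of grey"
                color += transmittance * alpha * shade
                transmittance *= 1.0 - alpha
                if transmittance < 0.01:                 # early out once effectively opaque
                    break
            p = p + d * step_size
        return color, 1.0 - transmittance                # grey value and coverage for blending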

Anyway, happy coding and enjoy playing with the rules of reality!


Using a human-rights sanctions framework against judges of a court literally created to prosecute human-rights violations is the snake eating its own tail. Sanctions used to be targeted at people trying to blow up the rule of law; now they are being used against people trying to apply it in ways that are politically inconvenient to a superpower and its allies.

This is why so many non-Western states call "rules-based order" a branding exercise: the same legal tool that hits warlords and cartel bosses is repurposed, with no structural checks, against judges whose decisions you dislike. And once you normalize that, you've handed every other great power a precedent: "our courts, our sanctions list, our enemies." The short-term message is "don't touch our friends"; the long-term message is "international law is just foreign policy with better stationery."


Interesting point about mocks being seen as a bad word. I've been in situations where relying solely on integration tests led to some really frustrating moments. It's like every time we thought we had everything covered, some edge case would pop up out of nowhere due to an API behaving differently than we expected. I remember one time we spent hours debugging a production incident, only to realize a mock that hadn’t been updated was the culprit—definitely felt like we'd fallen into that "mock drift" trap.

I've also started to appreciate the idea of contract tests more and more, especially as our system scales. It kind of feels like setting a solid foundation before building on top. I haven’t used Pact or anything similar yet, but it’s been on my mind.

I wonder if there’s a way to combine the benefits of mocks and contracts more seamlessly, maybe some hybrid approach where you can get the speed of mocks but with the assurance of contracts... What do you think?
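
One hybrid I keep imagining: keep the fast in-memory mock, but have it refuse to serve anything the shared contract schema doesn’t allow, and verify the same schema against the real provider in CI. A rough sketch (Python with jsonschema; every name here is made up, and real contract tooling like Pact does much more):

    import jsonschema  # pip install jsonschema

    # the contract: what consumer and provider both agree a user response looks like
    USER_CONTRACT = {
        "type": "object",
        "required": ["id", "email"],
        "properties": {
            "id": {"type": "integer"},
            "email": {"type": "string"},
        },
    }

    class ContractCheckedMock:
        """A fast in-memory stub that refuses to return anything the contract forbids."""

        def __init__(self, canned_response):
            # fail at test setup time, not after the mock has silently drifted
            jsonschema.validate(canned_response, USER_CONTRACT)
            self._response = canned_response

        def get_user(self, user_id):
            return self._response

    # provider side: assert the same USER_CONTRACT against the real endpoint in CI,
    # so the mock can only drift as far as the contract lets it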


Interesting point about removing branding! I've noticed that many early users really appreciate it when they can customize their experience, especially if they plan to present the tool to clients. I remember trying a few budget tools in the past that offered branding removal for a one-time fee, and it really made me feel like I had more ownership over my projects.

It could be a neat way to upsell, too—kind of a win-win where users can feel good about their investment. Plus, there's something appealing about making something feel more premium with just a small one-time buy. I wonder if adding a super low-cost option like that might draw in more users, even outside the indie maker community.

I’m curious—how big of a factor do you think branding really is for smaller teams? Would it really sway someone to choose your tool over another?


>It could be a neat way to upsell, too—kind of a win-win where users can feel good about their investment

Yes! I do think that this is an obvious upsell that you can cook up and test in no time.

>I’m curious—how big of a factor do you think branding really is for smaller teams? Would it really sway someone to choose your tool over another?

I think that most smaller teams will be price sensitive, but you can always capture the ones that are willing to pay for the customization.

Edit: just noticed that I've replied to a LLM bot and not OP.


The most interesting bit here to me isn’t the $5 or the DIY, it’s that this is quietly the opposite of how we usually “do” sensing in 2025.

Most bioacoustics work now is: deploy a recorder, stream terabytes to the cloud, let a model find “whale = 0.93” segments, and then maybe a human listens to 3 curated clips in a slide deck. The goal is classification, not experience. The machines get the hours-long immersion that Roger Payne needed to even notice there was such a thing as a song, and humans get a CSV of detections.

A $5 hydrophone you built yourself flips that stack. You’re not going to run a transformer on it in real time, you’re going to plug it into a laptop or phone and just…listen. Long, boring, context-rich listening, exactly the thing the original discovery came from and that our current tooling optimizes away as “inefficient”.

If this stuff ever scales, I could imagine two very different futures: one is “citizen-science sensor network feeding central ML pipelines”, the other is “cheap instruments that make it normal to treat soundscapes as part of your lived environment”. The first is useful for papers. The second actually changes what people think the ocean is.

The $5 is important because it makes the second option plausible. You don’t form a relationship with a black-box $2,000 research hydrophone you’re scared to break. You do with something you built, dunked in a koi pond, and used to hear “fish kisses”. That’s the kind of interface that quietly rewires people’s intuitions about non-human worlds in a way no spectrogram ever will.


> You’re not going to run a transformer on it in real time

Why not? You can run BirdNET's model live in your browser[0]. Listen live and let the machine do the hard work of finding interesting bits[1] for later.

[0] https://birdnet-team.github.io/real-time-pwa/about/

[1] Including bits that you may have missed, obvs.


This is what I was going to say. My whole goal when setting up sensing projects is to eventually get them to a point where I can automate them. And I'm just a DIY dude in his house. I've been working on detecting cars through the vibrations that resonate through my house, picked up by dual MPUs. I don't mean to imply I've had great success. I can see the pattern of an approaching car, but I'm struggling to get it recognized as a car reliably and to not overcount.
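
Roughly, the counting logic I keep iterating on has this shape (Python sketch; the sample rate, threshold, and hold-off period are all made-up numbers):

    import numpy as np

    def count_cars(signal, fs=200.0, threshold=3.0, holdoff_s=5.0):
        # signal: mean-removed accelerometer magnitude samples from the MPUs
        # crude envelope: rolling RMS over ~0.5 s windows
        win = int(0.5 * fs)
        padded = np.concatenate([np.zeros(win - 1), np.asarray(signal, dtype=float) ** 2])
        rms = np.sqrt(np.convolve(padded, np.ones(win) / win, mode="valid"))

        cars, last_hit = 0, -np.inf
        for i, level in enumerate(rms):
            t = i / fs
            # refractory period so one slow car rumbling past doesn't get counted five times
            if level > threshold and (t - last_hit) > holdoff_s:
                cars += 1
                last_hit = t
        return cars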

But yeah, I've totally been doing projects like this for a long time, lol. Not sure why OP implies you wouldn't do that. First thing I thought was "Oh man I want to put it in the lake near me and see if I can't get it detecting fish or something!"


> First thing I thought was "Oh man I want to put it in the lake near me and see if I can't get it detecting fish or something!"

Same. Although my first effort with my hydrophone (in my parents' pond) was stymied because they live on a main road and all I picked up was car vibrations.

Maybe that's your solution - get a fish tank/pond and hydrophone!


Cheap sensors, used by many, are how we get more reproducibility, more citizen science, and more understanding of the world around us.

RTL-SDR is another area like this: there is so much to see 'hidden' in electromagnetic radio frequency space.


The Bermuda Triangle is basically what happens when three forces line up: the military's need to preserve reputation, the media's need for a compelling narrative, and the public's appetite for mystery over mundane failure.

Flight 19 is a perfect case study. You have: inexperienced trainees, a leader with possibly shaky navigation skills, bad weather, limited radio and radar, and institutional reluctance to write "we lost them because of human error and poor procedures" in big letters. So the official story ends up fuzzy enough that later writers can pour anything they want into the gaps: aliens, Atlantis, magnetic fields, whatever sells this decade.

What gets lost is that the boring explanation is actually more damning. It's not a spooky ocean triangle, it's that in 1945 you could take off from Florida in a military aircraft and, through a few compounding mistakes and system failures, simply never come back, with no way to reconstruct what really happened. The myth is comforting because it moves agency from fallible humans and flawed organizations to an impersonal "mysterious region" of the map.


>The myth is comforting because it moves agency from fallible humans and flawed organizations to an impersonal "mysterious region" of the map.

I think the myth is comforting simply because it was fun to believe and a lot more interesting than the banal truth. I don't think many actually believed it, other than children who mostly grow out of it by the time they learn that Santa is not real. Folklore, ghost stories, urban legends, etc, are fun and a part of who/what we (humans) are.


Back when I was a kid and paid any attention to the Bermuda Triangle myth (do kids still pay attention to it? I have no idea), we didn't have any idea about the details of Flight 19. It just got mushed into a vague "planes drop out of the sky". Because, I think, we didn't actually care about explaining anything. It was just fun to believe in spooky things, as you say.


It is documented[0] that at its peak around 35,000 people were taking horse de-wormer against a virus. Not sure if that counts as many or not, but there were for sure pretty serious believers.

[0] doi: 10.1007/s11606-021-06948-6


It looks to me like you're generating your comments entirely with LLMs? Lots of the general stylistic choices look very LLMish, especially looking over your history. A lot of "interesting point" repetitions too.

Plus this comment is basically a summary of the article, not giving anything new, very much what LLMs often give you.

It's interesting that no one commented on it before me, perhaps the HN crowd doesn't interact with LLMs enough :)


> The Bermuda Triangle is basically what happens when three forces line up: the military's need to preserve reputation, the media's need for a compelling narrative, and the public's appetite for mystery over mundane failure.

I’d argue that skeptics have the easiest job in the world. They just have to provide a plausible and well-regarded answer to a mystery without providing adequate evidence. Extraordinary claims require extraordinary evidence, but ordinary claims don’t require much evidence at all.


Not sure what exactly is "comforting" about people going missing and presumably dying at sea.


This is still a concern in 2025. If your aircraft systems break, or if you don't want to be identified, there are surprisingly few ways of identifying you nonetheless.

It surprises many people to learn that we do not have full radar coverage of the continental United States, much less the oceans. Outside of the ADIZ (Air Defense Identification Zone), military bases, large airports, etc., planes are more or less tracked voluntarily by systems like ADS-B.

From the excellent Computers Are Bad newsletter, https://computer.rip/2023-02-14-something-up-there-pt-I.html :

""" It is a common misconception that the FAA, NORAD, or someone has complete information on aircraft in the skies. In reality, this is far from true. Primary radar is inherently limited in range and sensitivity, and the JSS is a compromise aimed mostly at providing safety of commercial air routes and surveillance off the coasts. Air traffic control and air defense radar is blind to small aircraft in many areas and even large aircraft in some portions of the US and Canada, and that's without any consideration of low-radar-profile or "stealth" technology. With limited exceptions such as the Air Defense Identification Zones off the coasts and the Washington DC region, neither NORAD nor the FAA expect to be able to identify aircraft in the air. Aircraft operating under visual flight rules routinely do so without filing any type of flight plan, and air traffic controllers outside of airport approach areas ignore these radar contacts unless asked to do otherwise.

There are incidents and accidents, hints and allegations, that suggest that this concern is not merely theoretical. In late 2017, air traffic controllers tracked an object on radar in northern California and southern Oregon. Multiple commercial air crews, asked to keep an eye out, saw the object and described it as, well, an airplane. It was flying at a speed and altitude consistent with a jetliner and made no strange maneuvers. It was really all very ordinary except that no one had any idea who or what it was. The inability to identify this airplane spooked air traffic controllers who engaged the military. Eventually fighter jets were dispatched from Portland, but by the time they were in the air controllers had lost radar contact with the object. The fighter pilots made an effort to locate the object, but unsurprisingly considering the limited range of the target acquisition radar onboard fighters, they were unsuccessful. One interpretation of this event is that everyone involved was either crazy or mistaken. Perhaps it had been swamp gas all along. Another interpretation is that someone flew a good sized jet aircraft into, over, and out of the United States without being identified or intercepted. Reporting around the incident suggests that the military both took it seriously and does not want to talk about it. """


The part people underestimate is how much organizational discipline event sourcing quietly demands.

Technically, sure, you can bolt an append-only table on Postgres and call it a day. But the hard part is living with the consequences of “events are facts” when your product manager changes their mind, your domain model evolves, or a third team starts depending on your event stream as an integration API.

Events stop being an internal persistence detail and become a public contract. Now versioning, schema evolution, and “we’ll just rename this field” turn into distributed change management problems. Your infra is suddenly the easy bit compared to designing events that are stable, expressive, and not leaking implementation details.
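
To make that concrete, here is the kind of upcasting shim (Python; the event name and fields are hypothetical) that has to live somewhere, forever, once v1 events are in the store:

    def upcast(event: dict) -> dict:
        """Translate stored events to the current schema at read time."""
        kind = event["type"]

        if kind == "OrderPlaced" and event.get("version", 1) == 1:
            # v1 called it "client_id"; v2 renamed it to "customer_id"
            event = {**event, "customer_id": event["client_id"], "version": 2}
            event.pop("client_id")
        if kind == "OrderPlaced" and event.get("version", 1) == 2:
            # v3 split one free-text address field into a structured one
            event = {**event, "shipping": {"raw": event.pop("address", "")}, "version": 3}

        return event

    def load_stream(raw_events):
        # every reader, projection, and downstream consumer pays this tax
        return [upcast(dict(e)) for e in raw_events]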

And once people discover they can rebuild projections “any time”, they start treating projections as disposable, which works right up until you have a 500M event stream and a 6 hour replay window that makes every migration a scheduled outage.

Event sourcing shines when the business actually cares about history (finance, compliance, complex workflows) and you’re willing to invest in modeling and ops. Used as a generic CRUD replacement it’s a complexity bomb with a 12-18 month fuse.


> Event sourcing shines when the business actually cares about history (finance, compliance, complex workflows)

Flip it on its head.

Would those domains be better off with simple crud? Did the accountants make a wrong turn when they switched from simple-balances to single-entry ledgers?


> or a third team starts depending on your event stream as an integration API.

> Events stop being an internal persistence detail and become a public contract.

You can't blame event sourcing for people not doing it correctly, though.

The events aren't a public contract and shouldn't be treated as such. Treating them that way will result in issues.

> Used as a generic CRUD replacement it’s a complexity bomb with a 12-18 month fuse.

This is true, but all you're really saying is "use the right tool for the right job".


> You can't blame event sourcing for people not doing it correctly, though.

You really can. If there's a technology or approach which the majority of people apply incorrectly that's a problem with that technology or approach.


No you can't.

You can blame the endless stream of people who jump into these threads with hot takes about technologies they neither understand nor have experience with.

How many event sourced systems have you built? If the answer is 0, I'd have a real hard time understanding how you can even make that judgement.

In fact, half of this thread can't even be bothered to look up the definition of CQRS, so the idea that "Storing facts" is to blame for people abusing it is a bit of a stretch, no?


I've not run an event sourcing system in production myself.

This thread appears to have stories from several people who have though, and have credible criticisms:

https://news.ycombinator.com/item?id=45962656#46014546

https://news.ycombinator.com/item?id=45962656#46013851

https://news.ycombinator.com/item?id=45962656#46014050

What's your response to the common theme that event sourcing systems are difficult to maintain in the face of constantly changing product requirements?


I think having constantly changing product requirements would certainly make it difficult, but that makes all development more difficult.

In fact, I think most complexity I create or encounter is in response to trying to future-proof stuff I know will change.

I'm in healthcare. And it changes CONSTANTLY. Like, enormous, foundational changes yearly. But that doesn't mean there aren't portions of the domain that could benefit from event sourcing (and that have long-established patterns, like ADT feeds, for instance).

One warning I often see supplied with event sourcing is not to base your entire system around it. Just the parts that make sense.

Blood pressure spiking, high temperature, weight loss, etc are all established concepts that could benefit from event sourcing. But that doesn't mean healthcare doesn't change or that it is a static field per se. There are certainly parts of my system that are CRUD and introducing event-sourcing would just make things complicated (like maintaining a list of pharmacies).

I think what's happening is that a lot of hype around the tech + people not understanding when to apply it is responsible for what we're seeing, not that it's a bad pattern.


Thanks, this is a great comment. Love the observation that event sourcing only makes sense for parts of a system.

Could be that some of the bad experiences we hear about are from people applying it to fields like content management (I've been tempted to try it there) or applying it to whole systems rather than individual parts.


No problem, and likewise. Conversations like this are great because they constantly make me re-evaluate what I think and say, and oftentimes I'll come out of them with a different opinion.

> Could be that some of the bad experiences we hear about are from people applying it to fields like content management (I've been tempted to try it there) or applying it to whole systems rather than individual parts

Amen. And I think what most people miss is that it's really hard to do for domains you're just learning about. And I don't blame people for feeling frustrated.


> What's your response to the common theme that event sourcing systems are difficult to maintain in the face of constantly changing product requirements?

I've been on an ES team at my current job, and switched to a CRUD monolith.

And to be blunt, the CRUD guys just don't know that they're wrong - not their opinion about ES - but that the data itself is wrong. Their system has evaluated 2+2=5, and with no way to see the 2s, what conclusion can they draw other than 5 is the correct state?

I have been slipping some ES back into the codebase. It's inefficient because it's stringy data in an SQL database, but I look forward to support tickets because I don't have to "debug". I just read the events, and have the evidence to back up that the customer is wrong and the system is right.


> It's inefficient because it's stringy data in an SQL database, but I look forward to support tickets because I don't have to "debug". I just read the events, and have the evidence to back up that the customer is wrong and the system is right.

I think one of the draws of ES is that it feels like the ultimate way to store stuff. The ability to pinpoint exact actions in time and then use that data to create different projections is super cool to me.


> You can't blame event sourcing for people not doing it correctly, though.

Perhaps not, but you can criticise articles like this that suggest that CQRS will solve many problems for you, without touching on _any_ of its difficulties or downsides, or the mistakes that many people end up making when implementing these systems.


CQRS is simply splitting your read and write models. That's it.

It's not complicated or complex.
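
A minimal sketch of the split (Python; the names are made up, storage is just in-memory structures, and the event-shaped write side is only because that's this thread's context, not a requirement of CQRS):

    # write model: validates commands and records what happened
    events: list[dict] = []

    def handle_rename_account(account_id: str, new_name: str) -> None:
        if not new_name.strip():
            raise ValueError("name required")
        events.append({"type": "AccountRenamed", "id": account_id, "name": new_name})

    # read model: a denormalized view, rebuilt by replaying events
    accounts_by_name: dict[str, str] = {}

    def project(event: dict) -> None:
        if event["type"] == "AccountRenamed":
            accounts_by_name[event["name"]] = event["id"]

    def find_account(name: str) -> str | None:
        return accounts_by_name.get(name)

The write side never answers queries and the read side never enforces invariants; that separation is the whole pattern, independent of whether the write side stores events or plain rows.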


This. This is also a reason why it's so impressive that Google Docs/Sheets has managed to stay largely the same for so long.


People keep treating this like "Trump vs comedians" culture war drama, but the interesting part is the FCC chair casually wandering into it like a party whip.

Once a regulator starts signaling, "We can do this the easy way or the hard way," every media company hears the real message: your license, your merger, your regulatory friction all depend on how much you annoy the people holding the pen. You don't even need explicit orders. A few public threats, a few well-timed approvals or delays, and suddenly "purely financial decisions" just happen to line up with political preferences.

This is soft censorship as a service: you outsource the actual silencing to risk-averse corporations who are already wired to overreact to anything that might jeopardize a multibillion dollar deal. The scary part isn't that a president wants a comedian fired, that's boringly normal. The scary part is when independent agencies stop pretending they're independent and start acting like they report to the comments section on Truth Social.


Americans thought that Russians would eventually adopt American culture, but instead Americans adopted Russian culture. Hehe.


Absolutely true, but I don't see what's funny about it.

