Hacker News | new | past | comments | ask | show | jobs | submit | shortrounddev2's comments

I've tried removing my post because the comment section here has become a platform for AI enthusiasts to spread dangerous medical misinformation. As HN does not really care about user privacy, I am unable to actually delete it. I renamed the post to [Removed], but it appears the admins are uninterested in respecting the intent of this, and renamed the post back to its original title.

Moral of the story, kids: don't post on HN.


Nobody here is advocating blindly trusting medical advice from LLMs, that is not "dangerous medical misinformation".

Even if you absolutely despise LLMs, this is just silly. The problem here isn't "AI enthusiasts"; you're getting called out for the absolute lack of nuance in your article.

Yes, people shouldn't do what you did. Yes, people will unfortunately continue doing what you did until they get better advice. But the correct nuanced advice in an HN context is not "never ask LLMs for medical advice"; you will rightfully get flamed for that. The correct advice is "never trust medical advice from LLMs; it could be helpful or it could kill you".


You are unambiguously wrong. Never ask LLMs for medical advice. There is no nuance here, and I suspect the only reason there's so much backlash against this simple and obvious fact is because of the amount of money in this scam of an industry.


> Never ask LLMs for medical advice

If you're not going to trust it, why not ask? What could possibly go wrong? At best you receive useful suggestions to take to a doctor, or guidance on which kind of specialist you should try to talk to. At worst you receive useless advice and maybe waste a bit of time.


You deleted the original article already. HN isn't letting you delete other people's discussion. Frankly that's WAI.


Disclaimer: not a doctor (obviously), ask someone who is qualified, but this is what the ID doctor told me:

Lyme is a bacterial infection, and can be cured with antibiotics. Once the bacteria is gone, you no longer have Lyme disease.

However, there is a lot of misinformation about Lyme online. Some people think Lyme is a chronic, incurable disease, which they call "chronic Lyme". Often, when a celebrity tells people they have Lyme disease, this is what they mean. Chronic Lyme is not a real thing - it is a diagnosis given to wealthy people by unqualified conmen or unscrupulous doctors in response to vague, hard-to-pin-down symptoms.


To add a few things.

The late stage of lyme disease is painful. Like "I think I'm dying" painful. It does have a range of symptoms, but those show up like 3 to 6 weeks after the initial infection.

A lot of people claiming chronic Lyme disease don't remember this stage.

Lyme disease does cause a range of problems if left untreated. But not before the "I think I'm dying" stage. It's basically impossible for someone, especially someone with a lot of wealth, to get Lyme disease and not have it caught early on.

Consider the OP's story. They tried to not treat it but ended up thinking "OMG, I think I have meningitis and I'm going to die!".

Lyme can kill, but it rarely does. Partially because before it gets to that point it drives people to seek medical attention.


No, the lesson here is never use an LLM to diagnose you, full stop. See a real doctor. Do not make the same mistake as me


"Don't ask LLMs leading questions" is a perfectly valid lesson here too. If you're going to ask an LLM for a medical diagnosis, you should at the very least know how to use LLMs properly.

I'm certainly not suggesting that you should ask an LLM for medical diagnoses, but still, someone who actually understands the tool they're using would likely not have ended up in your situation.


If you're going to ask an LLM for a medical diagnosis, stop what you're doing and ask a doctor instead. There is no good advice downstream of the decision to ask an LLM for a medical diagnosis


What about the multiple people who have reported receiving incredibly useful information after asking an LLM, when doctors were useless?

Should they not have done so?

Like this guy for example, was he being stupid? https://www.thesun.co.uk/health/37561550/teen-saves-life-cha...

Or this guy? https://www.reddit.com/r/ChatGPT/comments/1krzu6t/chatgpt_an...

Or this woman? https://news.ycombinator.com/item?id=43171639

This is a real thing that's happening every day. Doctors are not very good at recognizing rare conditions.


> What about the multiple people who have reported receiving incredibly useful information after asking an LLM, when doctors were useless?

They got lucky.

This is why I wrote this blog post. I'm sure some people got lucky and the LLM gave them the right answer, and they go and brag about it. How many people got the wrong answer? How many of them bragged about their bad decision? This is _selection bias_. I'm writing about my embarrassing lapse of judgment because I doubt anyone else will.


That's an easy cop out!

AI saves lives, it's selection bias.

AI gives bad advice after being asked leading questions by a user who clearly doesn't know how to use AI correctly, AI is terrible and nobody should ask it about medical stuff.

Or perhaps there's a more reasonable middle ground? "It can be very useful to ask AI medical questions, but you should not rely on it exclusively."

I'm certainly not suggesting that your story isn't a useful example of what can go wrong, but I insist that the conclusions you've reached are in fact mistaken.

The difference between your story and the stories of the people whose lives were saved by AI is that they generally did not blindly trust what the AI told them. It's not necessary to trust AI to receive helpful information, but it is basically necessary to trust AI in order to hurt yourself with it.


To be honest, I am pretty embarrassed about the whole thing, but I figured I'd post my story because of that. There are lots of people who misdiagnose themselves doing something stupid on the internet (or teenagers who kill themselves because they fell in love with some Waifu LLM), but you never hear about it because they either died or were too embarrassed to talk about it. Better to be transparent that I did something stupid so that hopefully someone else reads about it and doesn't do the same thing I did


> Not using ChatGPT in 2026 for medical issues and arming yourself with information [...] would be foolish in my opinion.

Using ChatGPT for medical issues is the single dumbest thing you can do with ChatGPT


True story. A friend was having abdominal pain. They were avoiding seeking medical help, thinking it would pass. The health service is overstretched in our country, and GP appointments can take time. I couldn't help wondering if it was something more serious, like appendicitis. I discussed their symptoms with an LLM, which said it sounded very much like appendicitis, that they should go to the ER immediately, and that they should bring a bag because they would likely be admitted and surgery would follow ASAP.

Guess what happened.


You can diagnose appendicitis using boring old internet searches. As I remember, loss of appetite and the pain moving over time are key.


Boring old internet searches wouldn’t have taken in the specifics I supplied.

And they mostly just come to a conclusion, rather than giving actionable advice I was able to pass on, like "take a bag" and "avoid eating, because they might want to operate soon".


They're much less likely to be hallucinated falsehoods though


Hey look, a reference website written by professionals has all the information you need, with no hallucinated falsehoods (which, to a non-specialist, are indistinguishable from the real thing): https://www.nhs.uk/conditions/appendicitis/

But it's 2025, so old fashioned.


Read to the bottom. I didn't specify the LLM because it doesn't matter. It's not the fault of the LLM, it's the fault of the user


It’s the fault of the tool!


I write a library which is used by customers to implement integrations with our platform. The #1 thing I think about is not

> How do I express this code in Typescript?

it's

> What is the best way to express this idea in a way that won't confuse or anger our users? Where in the library should I put this new idea? Upstream of X? Downstream of Y? How do I make it flexible so they can choose how to integrate this? Or maybe I don't want to make it flexible - maybe I want to force them to use this new format?

> Plus making sure that whatever changes I make are non-breaking, which means that if I update some function with new parameters, they need to be made optional, so now I need to remember, downstream, that this particular argument may or may not be `undefined` because I don't want to break implementations from customers who just upgraded the most recent minor or patch version

The majority of the problems I solve are philosophical, not linguistic
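
The "optional parameters so old callers don't break" point above can be sketched in a few lines of TypeScript. All names here are invented for illustration; they aren't from any real library:

```typescript
// Sketch: evolving a library function without breaking existing callers.
// v1 shipped `createIntegration(name)`; v1.1 wants a retry setting.
// New parameters go into an optional options object, so every old call
// site keeps compiling, and downstream code must treat them as possibly
// undefined and default them explicitly.

interface IntegrationOptions {
  retries?: number; // optional: old callers never pass this
}

function createIntegration(name: string, options?: IntegrationOptions): string {
  // Downstream, remember this may be undefined; default it in one place.
  const retries = options?.retries ?? 3;
  return `${name}: up to ${retries} retries`;
}

// Old callers are unaffected; new callers opt in.
console.log(createIntegration("billing"));
console.log(createIntegration("billing", { retries: 5 }));
```

The design choice being illustrated: in semver terms this is a minor release, because every pre-existing call shape still typechecks and behaves the same.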


Nobody in the tech industry cares about DEI (most people I've met are downright hostile to the idea). All those companies in 2020 that hired DEI consultants, made big announcements about DEI, and changed master to main were just buying cheap goodwill.


You could argue that they were pandering to certain powers then and are pandering now.


Who was president when Microsoft introduced D&I (Diversity and Inclusion) initiatives in 2019? AFAIK, they never called it DEI internally.


I caution against extrapolating to the entire tech industry from an individual sample of experience.


Didn't Microsoft spend years basing the bonuses and performance reviews of middle management heavily on the gender and race of who they managed to hire or promote?


The cynical take is that main was about shutting up GitHub employees' protests against ICE contracts [1]. And it seemed to work: no more protests, and another diversity consultant enriched.

[1] https://www.cnbc.com/2020/11/29/microsofts-github-has-become...


No; the burden of proof lies with the affirmative. To require someone to prove that they are NOT guilty of corruption is unfair (and also irrational)


index.html with script files would still benefit from a bundler. You can have a very minimal react footprint and still want to use react build tools just for bundling.
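
For what it's worth, "just bundling" is only a few lines these days. A minimal sketch assuming esbuild (file names and paths are illustrative, not from the project under discussion):

```typescript
// build.ts - bundle an index.html + script-files setup into one file.
// Assumes esbuild is installed (npm i -D esbuild); run with e.g. `npx tsx build.ts`.
import { build } from "esbuild";

await build({
  entryPoints: ["src/main.ts"], // single script entry point
  bundle: true,                 // follow imports and inline them
  minify: true,
  format: "esm",
  outfile: "public/bundle.js",  // referenced from index.html via <script type="module">
});
```

This gets you dependency resolution, minification, and TypeScript transpilation without pulling in a full framework like Next.js; it's a build-config fragment, so the exact flags depend on the project.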


Sure, but I'm more confused about the next.js usage than I am about the bundler. The bundler makes sense.


What effect do you imagine Next.js has on a bunch of code manipulating an HTML canvas? For vanilla code directly using browser APIs it’s basically just a bundler configuration, and while it’s not optimally configured for that use case (and annoying for other reasons) it’s probably better than what someone who has never configured webpack before would get doing it themselves.


Well for one, it ships next.js and react.js bundled in with the code manipulating an HTML canvas.


Okay, but it’s a web game. Those will make up less than 0.1% of the downloaded bytes required to render the first frame of the game. One image asset will dwarf the entire gzip/brotli Next.js/React framework.


What is the use case for bundling next.js with the web game? Just the layout of the page surrounding the game canvas? It just seems unnecessary, that's all. Traditionally, software development in general and game development in particular have tried to avoid unnecessary overhead that doesn't provide enough value to the finished product.

It's obvious why he didn't write the game in x86 assembly. It's also obvious why he didn't burn the game to CD-ROM and ship it to toy stores in big box format. Instead he developed it for the web, saving money and shortening the iteration time. The same question could be asked about next.js and especially about taking the time to develop Bun rather than just scrapping next.js for his game and going about his day. It's excellent for him that he did go this route of course, but in my opinion it was a strange path towards building this product.


Why would he stress about a theoretical inefficiency that has very little effect on the finished product or development process? Especially one that could be rectified in a weekend if needed? The game industry is usually pretty practical about what it focuses on from a performance perspective with good reason. They don’t build games like they’re demosceners min-maxing numbers for fun, and there’s a reason for that.

I also wonder how many people who sing the praises of an HTML file with a script tag hosted by Nginx or whatever have ever built a significant website that way. There are a lot of frustrating things about the modern JS landscape, but I promise you the end results were not better, nor was it easier, back before bundlers and React.

