Hacker News

If you don't understand the difference between an LLM and yourself, then you should talk to a therapist, not me.





At least LLMs attempt to answer the question. You just avoided it without any reasoning.

Because LLMs do not reason. They reply without a thought. The parent commenter, on the other hand, knows when not to engage with a bullshit argument.

Arguing with “philosophers” like you is like arguing with religious nut jobs.

Repeat after me: 1) LLMs do not reason

2) Human thought is infinitely more complex than any LLM algorithm

3) If I ever try to confuse both, I go outside and touch some grass (and talk to actual humans)


I agree with your point 2. I can't decide if I agree with your point 1 unless you can explain what "reason" means.

I found a few definitions.

"Reason is the capacity of consciously applying logic by drawing valid conclusions from new or existing information, with the aim of seeking the truth." Wikipedia

This Wikipedia definition refers to The Routledge dictionary of philosophy which has a completely different definition: "Reason: A general faculty common to all or nearly all humans... this faculty has seemed to be of two sorts, a faculty of intuition by which one 'sees' truths or abstract things ('essences' or universals, etc.), and a faculty of reasoning, i.e. passing from premises to a conclusion (discursive reason). The verb 'reason' is confined to this latter sense, which is now anyway the commonest for the noun too" - The Routledge dictionary of philosophy, 2010

Google (from Oxford) provides simpler definitions: "Think, understand, and form judgements logically." "Find an answer to a problem by considering possible options."

Cambridge: Reason (verb): "to try to understand and to make judgments based on practical facts" Reasoning (noun): "the process of thinking about something in order to make a decision"

Wikipedia uses the word "consciously" without giving a reference, and The Routledge treats reasoning as a human faculty. The other definitions point to an algorithmic or logical process that machines are capable of. The problematic concepts here are "understanding" and "judgement". It's still not clear whether LLMs can really do these, or will be able to in the future.


Here's mine:

0) theory == symbolic representation of a world with associated rules for generating statements

1) understanding the why of anything == building a theory of it

2) intelligence == ability to build theories

3) reasoning == proving or disproving statements using a theory

4) math == theories of abstract worlds

5) science == theories of real world with associated real world actions to test statements

If you use this framework, LLMs are just doing a mimicry of reasoning (learned from their training set), and a lot of people are falling for that illusion, because our everyday reasoning jibes very well with what an LLM does.
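To make point 3 concrete, here is a minimal sketch (my own toy example, not anything an LLM actually does) of "reasoning == proving statements using a theory": a theory is a set of facts plus rules, and reasoning is forward chaining until the query is derived or nothing new can be. All names are illustrative.

```python
# Toy illustration of "reasoning == proving or disproving statements
# using a theory". A "theory" here is a set of facts plus rules of the
# form (premises, conclusion); "reasoning" is forward chaining:
# repeatedly fire any rule whose premises all hold, until the query is
# derived or no rule adds anything new.

def reason(facts, rules, query):
    """Return True if `query` is derivable from `facts` via `rules`."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return query in derived

# A tiny "theory" of an abstract world (cf. point 4: math).
facts = {"socrates_is_human"}
rules = [({"socrates_is_human"}, "socrates_is_mortal")]

print(reason(facts, rules, "socrates_is_mortal"))    # True: derivable
print(reason(facts, rules, "socrates_is_immortal"))  # False: not derivable
```

The point of the contrast: this kind of system can only emit statements its theory licenses, whereas an LLM emits whatever is statistically plausible, which is exactly the mimicry the parent comment describes.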





