> I tell it the secret simple thing it’s missing and it gets it.
Anthropomorphizing LLMs is not helpful. It doesn't "get" anything; you just gave it new tokens, ones more closely correlated with the correct answer. It then generates responses similar to what a human would say in the same situation.
Note: I first wrote "it also mimics what a human would say", then realized I was anthropomorphizing a statistical algorithm and had to correct myself. It's hard sometimes, but language shapes how we think (which is ironically why LLMs are a thing at all), and using terms that better describe how it really works is important.
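To make the "it doesn't get anything" point concrete, here's a minimal sketch (assuming the Hugging Face transformers library and the public "gpt2" checkpoint; the prompts and hint are just placeholders) of what "telling it the thing it's missing" amounts to mechanically: the hint only extends the conditioning context, which shifts the next-token distribution toward continuations correlated with the answer.

```python
# Minimal sketch: a "hint" is just more conditioning tokens, nothing "understood".
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def continue_text(prompt: str) -> str:
    # Encode the prompt into tokens; these are all the model is conditioned on.
    inputs = tokenizer(prompt, return_tensors="pt")
    # Greedily decode a continuation given exactly those tokens.
    output_ids = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Same question, with and without the extra "hint" tokens appended:
print(continue_text("Q: Why does my loop never end?\nA:"))
print(continue_text("Q: Why does my loop never end? Hint: the counter is never incremented.\nA:"))
```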
Given that LLMs are trained on text written by humans, who don't respond well to being dehumanised, I expect anthropomorphising them to work better than the opposite.
Aside from getting more useful responses back, I think it's just bad for your brain to treat something that acts like a person with disrespect. It becomes "it's just a chatbot", "it's just a dog", "it's just a low-level customer support worker".
While I also agree with you on that, there are prompts that make them not act like a person at all, and prompts can be written once and reused many times, which lessens that effect.
This is why I tend to lead with the "quality of response" argument rather than the "user's own mind" argument.
I am not talking about getting it to generate useful output; treating it extra politely or threatening it with fines seems to give better results sometimes, so why not. I am talking about the phrase "gets it". It does not get anything.
It's a feature of language to describe things in those terms even if they aren't accurate.
>using terms which better describe how it really works is important
Sometimes, especially if you're doing something where that matters, but abstracting those details away is also useful when trying to communicate clearly in other contexts.