You're mistaking pattern matching and the modeling of relationships in latent space for genuine reasoning.
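If "relationships in latent space" sounds abstract, here is a minimal sketch of what I mean (this assumes gensim and its downloadable GloVe vectors, which are not part of the original discussion; any pretrained embedding behaves the same way). The "relationship" is just geometry between learned vectors, recovered by arithmetic, with no inference step anywhere:

    # Hedged sketch: assumes gensim is installed and can download the
    # "glove-wiki-gigaword-100" vectors. The analogy is recovered purely by
    # vector arithmetic over co-occurrence statistics, not by reasoning.
    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-100")   # pretrained word embeddings
    result = vectors.most_similar(positive=["paris", "germany"],
                                  negative=["france"], topn=1)
    print(result)   # typically [('berlin', ...)] -- geometry, not inference

That is the kind of structure the model exploits, and it looks impressive right up until the problem falls outside the learned geometry.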
I don't know what you're working on, but while I'm not curing cancer, I am solving problems that aren't in the training data and can't be found on Google. Just a few days ago, Gemini 2.5 Pro literally told me it didn’t know what to do and asked me for help. The other models hallucinated incorrect answers. I solved the problem in 15 minutes.
If you're working on yet another CRUD app, and you've never implemented transformers yourself or understood how they work internally, then I understand why LLMs might seem like magic to you.
None of your current points actually support your position.
1. No, it doesn't. That's a ridiculous claim. Are you seriously suggesting that statistics require reasoning?
2. If you map that language to tokens, it's obvious the model will follow the statistical mapping it learned over those token sequences (see the sketch after this list).
etc.
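Here is a minimal sketch of what "following the mapping" means in practice (this assumes the Hugging Face transformers library, PyTorch, and GPT-2 as a stand-in model, none of which were named above; any causal LM works the same way). The prompt is reduced to token IDs, and the model's only operation is scoring which ID is statistically likely to come next:

    # Hedged sketch: assumes transformers + torch are installed and GPT-2 is
    # used as a stand-in. The prompt becomes a tensor of integer token IDs,
    # and the model outputs a score for every vocabulary item as the next token.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(ids).logits[0, -1]        # scores over the whole vocabulary
    top = torch.topk(logits, 5).indices
    print([tokenizer.decode(int(t)) for t in top])  # continuations ranked by learned co-occurrence

Next-token scoring over learned co-occurrence is the entire mechanism; everything else is sampling on top of it.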
Here are papers showing that these models can't reason:
https://arxiv.org/abs/2311.00871
https://arxiv.org/abs/2309.13638
https://arxiv.org/abs/2311.09247
https://arxiv.org/abs/2305.18654
https://arxiv.org/abs/2309.01809