All science books and papers (pre-LLMs) were written by people. They got us to the moon and brought us the plane and the computer and many other things.
Many other things like war, animal cruelty, child abuse, wealth disparity, etc. Hell, we are speed-running the destruction of the environment of the one and only planet we have. Humans are quite clever, though I fear we might be even more arrogant.
Regardless, I was not arguing that LLMs are more capable than people. My point was that there is a bit of a selection bias going on. Perhaps conjecture on my part, but I am inclined to believe that people are keener to notice and make a big fuss over inaccuracies in LLMs, yet less likely to do so when humans are inaccurate.
Think about the everyday world we live in: how many human-written bugs make it past reviews, tests, and QA into production? How many doctors give the wrong diagnosis or make a mistake that harms or kills someone? How many lawyers give poor legal advice to their clients?
Fallible humans expecting infallible results from their fallible creations is quite the expectation.
> Fallible humans expecting infallible results from their fallible creations is quite the expectation.
We built tools to accomplish things we cannot do well, or at all. So we do expect quite a lot from them, even though we know they're not perfect. We have writing and books to aid our memory and to transfer knowledge. We have cars and planes to transport us faster than our legs ever could... Any apparatus that doesn't help us do something better is aptly called a toy. A toy car can be faster than any human, but it's still a toy.