How are you going to ship a tool you don't understand? What are you going to do when it breaks? How are you going to debug issues in a language you don't understand? How do you know the code the LLM generated is correct?
LLMs absolutely help me pick up new skills faster, but if you can't have a discussion about Rust and Svelte, no, you didn't learn them. I'm making a lot of progress learning deep learning, and ChatGPT has been critical for me to do so. But I still have to read books, research papers, and my framework's documentation. And it's still taking a long time. If I hadn't read the books, I wouldn't know what questions to ask or how to tell when ChatGPT is completely off base (which happens all the time).
I fully understand your point and even agree with it to an extent. LLMs are just another layer of abstraction, like C is an abstraction for asm is an abstraction for binary is an abstraction for transistors... we all stand on the shoulders of giants. We write code to accomplish a task, not the other way around.
I think friction is important to learning and expertise. LLMs are great tools if you view them as compression. Calculators are a good example here; people like to bring them up as a gotcha, but an alarming number of people are now innumerate when it comes to basic receipt math or comprehending orders of magnitude.