Obviously, they haven't figured out anything remotely sentient. It's cool as fuck, but it's not actually thinking. Thinking requires learning. You could show it a cat and it would still tell you it's a dog, no matter how many times you try to correct it.
Nothing about sentience is obvious. If the trees were sentient, would it be obvious? Is it therefore obvious that they're not? I think it's a no in both cases. The same argument applies to AI models.
Sentience is at once too high a standard and too low a standard for AI.
It's too high in that it requires actual consciousness, which may be a very tough architectural problem at best (if functionalism is true) or an unknowable metaphysical mystery at worst (if some form of substance or property dualism is true).
And it's much too low a standard in that many, many sentient creatures are nowhere near intelligent enough to be useful assistants in the domains where we want to use AI.