I agree with the sentiment, but the problem with this question is that LLMs don't "know" *anything*, and they don't actually "know" how to answer a question like this.
It's just statistical text generation. There is *no actual knowledge*.
True, but I still think it could be done, within the LLM itself.
It's just generating the next token from what's in the context window. At each step there are various candidate tokens with various probabilities. If none of them is above a threshold, say "I don't know," because there's nothing in the training data that tells the model what to say there.
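A minimal sketch of that idea in Python, purely for illustration. `next_token_logits(context)` and `decode(token_id)` are hypothetical stand-ins for whatever the model's API actually exposes, and the fixed `threshold` is a placeholder, not a recommended value:

```python
import math

def softmax(logits):
    # Convert raw logits to probabilities (numerically stable).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate_or_abstain(context, next_token_logits, decode,
                        threshold=0.3, max_tokens=50):
    """Greedy decoding that abstains when the model is unsure.

    next_token_logits(context) -> list of logits   (hypothetical API)
    decode(token_id) -> text of that token         (hypothetical API)
    """
    out = []
    for _ in range(max_tokens):
        probs = softmax(next_token_logits(context))
        best = max(range(len(probs)), key=lambda i: probs[i])
        # If even the most likely next token is below the threshold,
        # treat it as "the training data has no strong answer here."
        if probs[best] < threshold:
            return "I don't know."
        token = decode(best)
        out.append(token)
        context += token
    return "".join(out)
```

One catch with a raw per-token cutoff: a low maximum probability can also mean the mass is spread across many equally good phrasings (synonyms, word order), which looks the same as genuine ignorance. Telling those apart is part of why this is harder than it sounds.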
Is that good enough? "I don't know." I suspect the answer is, "No, but it's closer than what we're doing now."