I think the Turing test has turned out to be a fuzzier line than expected. People are, to varying degrees, learning LLM tells even as the models improve, so what might have passed the Turing test in 2020 might not pass today. Similarly, it seems to be the case that conversations with LLMs often start better than they end, even today - so an LLM might pass a short Turing test but fail a very long one that runs to hundreds of rounds.
We’ve clearly passed the Turing test, I think. I can’t think of many ways I’d be able to reliably detect an LLM if it were coded to just act like a person talking to me on Discord.
The Turing test isn't dead. The true Turing test is a thought experiment, and it's not something that can be replicated in the real world.
Given enough time and interaction, you can still spot an LLM impersonating a person on Discord; at the very least, something will feel off. That's even more true in a formal, adversarial setting where you know you might be talking to a machine.