Interestingly, modern ML doesn't require Turing completeness. And yet people are seriously considering the possibility of AGI - I'd find it pretty amusing if Turing completeness turned out not to be necessary for it.
A single pass of token inference isn't Turing complete by itself, but if the output can have side effects (e.g. editing the prompt for the next iteration), that's a whole different story.
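A toy sketch of that point (all names here are made up, and the "model" is just a lookup table standing in for one bounded inference step): each step alone does a constant amount of work, but looping its output back in as the next input, with an unbounded tape playing the role of an editable prompt, gives you a full Turing machine. This one increments a binary number.

```python
BLANK = "_"

# Per-step "inference": (state, symbol) -> (new state, symbol to write, head move)
STEP = {
    ("seek_right", "0"): ("seek_right", "0", +1),
    ("seek_right", "1"): ("seek_right", "1", +1),
    ("seek_right", BLANK): ("carry", BLANK, -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt", "1", 0),
    ("carry", BLANK): ("halt", "1", 0),
}

def run(bits: str) -> str:
    tape = {i: b for i, b in enumerate(bits)}  # the unbounded, editable "prompt"
    state, head = "seek_right", 0
    while state != "halt":
        symbol = tape.get(head, BLANK)
        state, write, move = STEP[(state, symbol)]  # one bounded step
        tape[head] = write                          # side effect: edit the tape
        head += move                                # ...which changes the next input
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, BLANK) for i in range(lo, hi + 1)).strip(BLANK)

print(run("1011"))  # -> 1100  (11 + 1 = 12)
```

The bounded per-step computation isn't where the power comes from; it's the feedback loop plus unbounded external state, which is exactly what "the output edits the prompt for the next iteration" buys you.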