But the sampling randomness doesn't directly translate into randomness in the results. The model may pick randomly from a thousand candidate continuations, but if 90% of those candidates are some variant of 'the coin comes up heads', the outcome is still heavily biased.
I think a more useful approach is to give the LLM access to an API that returns a random number, and let it call that API during response formulation when it needs one.
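A minimal sketch of what that could look like, assuming a generic tool-calling setup (the tool name `get_random_int` and the dispatch function are hypothetical, not any particular vendor's function-calling API):

```python
import random

# Hypothetical tool spec the LLM would be told about; the model emits a
# call to it instead of sampling 'heads'/'tails' from its own (possibly
# biased) token distribution.
RANDOM_TOOL = {
    "name": "get_random_int",
    "description": "Return a uniformly random integer in [low, high].",
    "parameters": {"low": "int", "high": "int"},
}

def handle_tool_call(name: str, args: dict) -> int:
    """Dispatch a tool call from the model to real code with a real RNG."""
    if name == "get_random_int":
        return random.randint(args["low"], args["high"])
    raise ValueError(f"unknown tool: {name}")

# Simulated exchange: the model asks the host for a fair coin flip.
flip = handle_tool_call("get_random_int", {"low": 0, "high": 1})
outcome = "heads" if flip == 0 else "tails"
```

The point is that the uniformity now comes from the host's RNG, not from the model's token probabilities, so the 90%-heads bias above disappears.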