Right, so Mistral accidentally released one internal prototype that was a fine-tuned LLaMA. How does it follow from there that their other models are the same? Given that the weights are open, we can look, and nope, it's not the same. They don't even use the same vocabulary!
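If you'd rather check than take my word for it, here's a rough sketch using Hugging Face `transformers` to compare the two tokenizers directly. The repo IDs are just illustrative examples of checkpoints you could pull (the Llama repos are gated, so you may need to log in first):

```python
from transformers import AutoTokenizer

# Example repo IDs -- swap in whichever checkpoints you want to compare.
# The Llama repo is gated; you may need `huggingface-cli login` first.
llama = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
mistral = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

llama_vocab = set(llama.get_vocab())
mistral_vocab = set(mistral.get_vocab())

print("LLaMA vocab size:  ", len(llama_vocab))
print("Mistral vocab size:", len(mistral_vocab))
print("Shared tokens:     ", len(llama_vocab & mistral_vocab))

# Tokenizing the same string also makes any differences obvious.
sample = "Open weights are not the same thing as open source."
print(llama.tokenize(sample))
print(mistral.tokenize(sample))
```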
And I have no idea what you mean by "they just pick the dataset". The LLaMA training set is not publicly available - it's open weights, not open source (i.e. not reproducible).