There are a few instances where things I stated (about historical topics or very narrow topics in sociology) were incorrect. LLMs scraped these off of web forums or other places, and now these bogus “facts” are permanently embedded in LLM models, because nobody else ever really discussed those specific topics.
Most amusingly, someone cited LLM-generated output about one of these, telling me the “fact” was true while I was explaining to them that it wasn’t.