
> An LLM doesn't know more than what's in the training data.

Post-training for an LLM isn't just "data" anymore; it also includes verifier programs, so the model can in fact end up more correct than its training data, as long as the search finds weights that produce more verifiably correct answers.
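A toy sketch of the idea, with illustrative names (this is not any real RL library's API): a verifier program checks answers independently of any training data, and "search over weights" just means keeping whichever candidate policy the verifier scores highest. The reward signal comes from the check, not from labeled examples, which is why it can exceed the data's quality.

```python
import random

def verifier(question, answer):
    """Programmatic check: is the answer to 'a+b' actually correct?"""
    a, b = map(int, question.split("+"))
    return answer == a + b

def sample_answer(question, error_rate, rng):
    """Toy policy: answers correctly with probability 1 - error_rate."""
    a, b = map(int, question.split("+"))
    correct = a + b
    return correct if rng.random() > error_rate else correct + 1

def verified_accuracy(error_rate, questions, rng):
    """Fraction of sampled answers the verifier accepts (the reward)."""
    hits = sum(verifier(q, sample_answer(q, error_rate, rng)) for q in questions)
    return hits / len(questions)

def search_weights(candidate_error_rates, questions, seed=0):
    """'Search over weights': keep the candidate the verifier scores highest."""
    rng = random.Random(seed)
    return max(candidate_error_rates,
               key=lambda e: verified_accuracy(e, questions, rng))

questions = [f"{a}+{b}" for a in range(5) for b in range(5)]
best = search_weights([0.5, 0.3, 0.1, 0.0], questions)
```

Real RLVR setups replace the toy policy with LLM sampling and gradient updates, but the structure is the same: the verifier, not the data, defines what counts as correct.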


