
I think they're especially likely to hallucinate when asked to cite sources; they're prone to just making sources up. A lot of the work my lawyer friend has asked of ChatGPT or Claude requires it to cite case law, and he's said it has simply invented cases that aren't real. So while it's useful as a launching point and can in fact find real case law, you still have to go over every single thing it says with a fine-tooth comb. That makes its productivity impact much lower than with code, where you can see immediately whether the output works.


