
It seems clear to me that LLMs are a useful sort of dumb-smart tool. They can take some pretty good stabs in the dark, and therefore do much better either in an environment that can give them feedback (coding) or where there is no objectively correct answer (writing a poem). That opens the door to some novel kinds of computational tasks, and the more feedback you can provide within the architecture of your application, the more useful the LLM will probably be (see the sketch below). I think the hype about their genuine intelligence is overblown, but that doesn't mean they aren't useful.
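As a concrete illustration of that feedback-loop idea, here's a minimal Python sketch (llm_complete is a hypothetical stand-in for whatever completion API you use, and the task prompt is assumed to ask the model for a self-testing script with asserts): each attempt is executed, and any failure output is fed back as context for the next attempt.

    import os
    import subprocess
    import tempfile

    def llm_complete(prompt: str) -> str:
        # Hypothetical LLM call; swap in your actual provider's API here.
        raise NotImplementedError

    def generate_with_feedback(task: str, max_attempts: int = 3) -> str | None:
        prompt = task
        for _ in range(max_attempts):
            code = llm_complete(prompt)
            with tempfile.NamedTemporaryFile(
                "w", suffix=".py", delete=False
            ) as f:
                f.write(code)
                path = f.name
            # Run the generated script; a nonzero exit code means
            # an assert fired or the code crashed.
            result = subprocess.run(
                ["python", path], capture_output=True, text=True
            )
            os.unlink(path)
            if result.returncode == 0:
                return code  # the attempt passed its own checks
            # Feed the failure back so the next stab in the dark
            # is aimed by the error message.
            prompt = f"{task}\n\nPrevious attempt failed with:\n{result.stderr}"
        return None

The point of the loop is exactly the one in the comment: each extra channel of feedback (exit codes, stderr, test output) turns a blind guess into a corrected one.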



