Yes, agreed. Context length might be a factor, since the total number of prompt tokens is >120k. LLM performance generally degrades at longer context lengths.
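
For reference, a quick way to sanity-check the prompt size is to count tokens locally. A minimal sketch below, assuming Python with the tiktoken library; the "cl100k_base" encoding, the prompt.txt file, and the 120k threshold are all illustrative, and exact counts vary by model tokenizer:

    # Minimal sketch: count prompt tokens with tiktoken to see
    # whether the context exceeds ~120k tokens (illustrative threshold).
    import tiktoken

    def count_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
        """Return the number of tokens `text` encodes to under the given encoding."""
        enc = tiktoken.get_encoding(encoding_name)
        return len(enc.encode(text))

    prompt = open("prompt.txt").read()  # hypothetical file holding the full prompt
    n = count_tokens(prompt)
    print(f"{n} tokens; over 120k: {n > 120_000}")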