
Rewind relies on GPT-4 for the useful parts. I assume Rem will support local LLMs?

https://help.rewind.ai/en/articles/7791703-ask-rewind-s-priv...



That's the plan. Very open to ideas on the best way to do it. Seems like either stdin/stdout or an API call via localhost.
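For context, a minimal sketch of the localhost-API option: local model servers such as llama.cpp's server and Ollama expose an OpenAI-compatible `/v1/chat/completions` endpoint, so the app can just POST JSON to it. The port, model name, and prompt below are placeholder assumptions, not anything Rem has committed to.

```python
import json
import urllib.request

def build_localhost_request(prompt, host="127.0.0.1", port=8080,
                            model="local-model"):
    """Build an OpenAI-style chat-completion request aimed at a local
    model server. Port and model name are placeholders; llama.cpp's
    server and Ollama both accept this request shape."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"http://{host}:{port}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# Build (but don't send) a request, to show the shape on the wire.
req = build_localhost_request("Summarize what was on screen in the last hour.")
print(req.full_url)  # http://127.0.0.1:8080/v1/chat/completions
```

The stdin/stdout option would instead spawn the model binary with `subprocess` and pipe the prompt in; the localhost-API route has the advantage that the model stays resident between queries instead of reloading weights on every call.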



