Hacker News

Hey, we are actively working on improving support for Llama models. At the moment CORE does not produce optimal results with Llama-based models, but we are making progress toward better compatibility and output quality in the near future.

Also, we built CORE first internally for our main project, SOL, an AI personal assistant. While building a better memory for our assistant we realised its importance, and we are of the opinion that memory should not be vendor-locked: it should be pluggable and belong to the user. Hence we built it as a separate service.
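To make the "pluggable memory" idea concrete, here is a minimal sketch of what such an interface could look like. All names here (`MemoryStore`, `remember`, `recall`, `InMemoryStore`) are hypothetical illustrations, not CORE's actual API; the point is just that any backend implementing the interface can be swapped in without touching the assistant.

```python
from abc import ABC, abstractmethod


class MemoryStore(ABC):
    """Hypothetical pluggable memory interface: the assistant talks to
    this abstraction, so the user can swap the backend freely."""

    @abstractmethod
    def remember(self, user_id: str, fact: str) -> None: ...

    @abstractmethod
    def recall(self, user_id: str, query: str) -> list[str]: ...


class InMemoryStore(MemoryStore):
    """Toy in-process backend; a real service would persist externally."""

    def __init__(self) -> None:
        self._facts: dict[str, list[str]] = {}

    def remember(self, user_id: str, fact: str) -> None:
        self._facts.setdefault(user_id, []).append(fact)

    def recall(self, user_id: str, query: str) -> list[str]:
        # Naive substring match stands in for real retrieval.
        return [f for f in self._facts.get(user_id, [])
                if query.lower() in f.lower()]
```

Because the assistant only depends on the abstract interface, the memory genuinely belongs to the user: pointing it at a different `MemoryStore` implementation is a one-line change.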


I definitely would not recommend Llama models; they were mostly outdated by the time they were released. The likes of Qwen, DeepSeek, etc. are much more useful.

Hey, we started with Llama, but since it was not giving good results we fell back to using GPT and launched with that.
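The fallback approach described above can be sketched as a small wrapper: try the primary model first, and switch to the backup provider if the call fails or returns nothing usable. The function and parameter names here are hypothetical, not from CORE's codebase.

```python
from typing import Callable


def generate_with_fallback(
    prompt: str,
    primary: Callable[[str], str],
    fallback: Callable[[str], str],
) -> str:
    """Hypothetical sketch: prefer the primary model, fall back on
    failure or an empty response."""
    try:
        out = primary(prompt)
        if out and out.strip():
            return out
    except Exception:
        # In production, log the failure instead of silently swallowing it.
        pass
    return fallback(prompt)
```

In practice the quality check would be richer than "non-empty string" (e.g. schema validation of the extracted memory), but the control flow is the same.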

We will evaluate Qwen and DeepSeek going forward; thanks for the suggestion.



