
Just tried it (gemma3:12b) using ollama and also through open-webui

It's surprisingly fast and pretty good. I was really impressed that I could feed it images through open-webui.

However, it keeps failing, both on the terminal and through open-webui. The error is:

"Error: an error was encountered while running the model: unexpected EOF"

It seems to be an Ollama issue; according to tickets on GitHub it's supposedly related to CUDA, but I'm running it on an M3 Mac.

Until now I'd never had this issue with Ollama, so I wonder if it's related to having updated to 0.6.0.
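For reference, here's roughly how I've been reproducing it from the terminal (a sketch; the commands assume a standard Ollama install, and the log path is the default macOS location, which may differ on other setups):

```shell
# Check the Ollama version (the 0.6.0 update is my suspect)
ollama --version

# Re-pull the model in case the original download was truncated
ollama pull gemma3:12b

# Retry on the CLI to rule out open-webui as the cause
ollama run gemma3:12b "hello"

# Inspect the server log for the EOF error (default macOS path)
tail -n 50 ~/.ollama/logs/server.log
```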

Does Ollama use llama.cpp? If so, you have to update that. You nearly always have to update the backend when a new model like this comes out.

I assure you it works fine with CUDA.
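Since Ollama vendors llama.cpp into its own binary, updating the backend in practice means updating Ollama itself. A sketch (assumes a Homebrew install on macOS or the official install script on Linux; your install method may differ):

```shell
# macOS, if Ollama was installed via Homebrew
brew upgrade ollama

# Linux: re-run the official install script to fetch the latest build
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the new version (and its bundled backend) is active
ollama --version
```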
