
Doesn't yet work in LM Studio. Barfs an error when trying to load the model. (Error 6, whatever that means. Happy I missed the first 5.)





You need the newest llama.cpp, and if you have an AMD card and recently updated the drivers, roll them back. Most people reporting problems are using ROCm.

I assure you Gemma 3 works fine in LM Studio. GGUF and MLX builds are available.


> Barfs an error when trying to load the model

Since you're not using the official models (they're not GGUFs), what exact model are you trying to load? The third party you rely on might have screwed something up.
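One quick way to rule out a corrupt or mislabeled third-party download is to check the file header yourself: GGUF files start with the ASCII magic bytes `GGUF`, followed by a little-endian format version. A minimal sketch (the function name is mine, not part of any library):

```python
import struct

GGUF_MAGIC = b"GGUF"  # per the GGUF spec, every file starts with these four bytes

def looks_like_gguf(path):
    """Return (ok, version): whether the file starts with the GGUF magic,
    and the little-endian format version if it does."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != GGUF_MAGIC:
        return False, None
    (version,) = struct.unpack("<I", header[4:8])
    return True, version
```

A truncated download, or a safetensors checkpoint someone renamed to `.gguf`, fails this check before the loader in llama.cpp or LM Studio ever runs.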


Please make sure to update to the latest llama.cpp version.


