You need the newest llama.cpp, and if you have an AMD card and recently updated the drivers, roll them back. Most of the people complaining are on ROCm.

I assure you Gemma 3 works fine in LM Studio. GGUF and MLX builds are available.
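
If you'd rather script it than use LM Studio, here's a minimal sketch using the llama-cpp-python bindings. The package, the model filename, and the parameter values are my assumptions, not something from the comments above; you still need a build recent enough to include Gemma 3 support, per the "newest llama.cpp" point.

    # Minimal sketch: load a Gemma 3 GGUF with llama-cpp-python.
    # The model filename is hypothetical; point it at whichever
    # quantization you actually downloaded.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./gemma-3-4b-it-Q4_K_M.gguf",  # hypothetical path
        n_gpu_layers=-1,  # offload as many layers as the backend allows
        n_ctx=8192,       # context window; adjust to taste
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Say hello in one sentence."}]
    )
    print(out["choices"][0]["message"]["content"])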