OpenHands just announced a collaboration with AMD that lets developers run full coding agents locally on new Ryzen AI hardware — no cloud APIs, no data leaving the machine, and zero per-token cost.
The setup uses AMD’s open-source Lemonade stack + Ryzen AI Max series (CPU, GPU, and NPU delivering a combined 126 TOPS) to run models like Qwen3-Coder-30B directly on-device. You can point OpenHands to a local Lemonade endpoint and get full autonomous agent workflows running offline.
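As a quick smoke test, you can hit the local endpoint with any OpenAI-compatible client before wiring it into OpenHands. Here's a minimal sketch in Python, assuming Lemonade exposes an OpenAI-compatible API; the base URL, port, and model id are my assumptions, so check your Lemonade Server setup for the actual values:

    # Assumes Lemonade Server speaks the OpenAI chat completions API.
    # Base URL, port, and model id are assumptions -- adjust to your setup.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/api/v1",  # assumed local Lemonade address
        api_key="not-needed",                     # local servers typically ignore this
    )

    resp = client.chat.completions.create(
        model="Qwen3-Coder-30B",  # assumed model id as registered in Lemonade
        messages=[{"role": "user", "content": "Reverse a string in Python."}],
    )
    print(resp.choices[0].message.content)

If that round-trips, pointing OpenHands at the same base URL gives the agent the same fully offline model.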
Why it’s interesting:
- Local inference for real coding agents (not just autocomplete)
- Privacy/compliance: IP never leaves your workstation
- Cost: no usage-based billing
- Performance: NPU/GPU-optimized, low latency
- Open-source stack end-to-end
Given how fast local LLM tooling is evolving (Apple, NVIDIA, AMD, etc.), this feels like an inflection point: true autonomous dev-agents running locally, not in the cloud.
Curious to hear from others:
Who else is running agentic workloads entirely locally?
Is this the beginning of serious local-first dev tooling?
How big will “offline AI” get as hardware accelerates?
OpenHands raised an $18.8M Series A led by Madrona to build the open, secure, and model-agnostic platform for cloud coding agents. The collaboration with AMD extends that work to local agents, in service of the same goal: powerful coding agents for every developer, for free, forever.
Grafbase just launched Nexus, an open-source AI Router that unifies MCP servers and LLMs through a single endpoint. Designed for enterprise-grade governance, control, and observability, Nexus helps teams manage AI complexity, enforce policies, and monitor performance across their entire stack.
Built to work with any MCP server or LLM provider out of the box, Nexus is designed for developers who want to integrate AI with the same rigor as production APIs.
Yeah, they definitely belong in the same space. Nexus is an LLM Gateway, but early on the focus has been on MCP: aggregation, authentication, and a smart approach to tool selection. There's a paper, plus a lot of anecdotal evidence, pointing to LLMs not coping well when the selection of tools gets too large: https://arxiv.org/html/2411.09613v1
So Nexus takes a tool search based approach to solving that, among other cool things.
Disclaimer: I don't work on Nexus directly, but I do work at Grafbase.
Here are a few key differentiators vs LiteLLM today:
- Nexus does MCP server aggregation and LLM routing; LiteLLM only does LLM routing
- The Nexus router is a standalone binary that runs with minimal TOML configuration and, optionally, Redis; LiteLLM is a whole package with dashboard, database, etc.
- Nexus is written in Rust; LiteLLM is written in Python
That said, LiteLLM is an impressive project. We're just getting started with Nexus, so stay tuned for a steady barrage of feature launches in the coming months :)
The main difference is that while you can get Nexus to list all tools, by default the LLM accesses tools via semantic search: Nexus returns only the tools relevant to what the LLM is trying to accomplish. Also, Nexus speaks MCP to the LLM; it doesn't translate the way litellm_proxy seems to (I wasn't familiar with it previously).
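To make the tool-search idea concrete, here's an illustrative sketch (emphatically not Nexus's implementation, which would use proper embeddings): score each tool's description against the model's intent and expose only the top matches.

    # Toy illustration of semantic tool search -- NOT how Nexus implements it.
    # A real router would use learned embeddings; bag-of-words cosine similarity
    # is used here only to show the shape of the idea.
    import math
    from collections import Counter

    TOOLS = {
        "create_issue": "create a new issue in the bug tracker",
        "search_docs": "search the documentation for a query",
        "run_query": "run a SQL query against the analytics database",
        "send_email": "send an email to a recipient",
    }

    def embed(text: str) -> Counter:
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        norm = math.sqrt(sum(v * v for v in a.values())) \
             * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def search_tools(intent: str, k: int = 2) -> list[str]:
        q = embed(intent)
        return sorted(TOOLS, key=lambda n: cosine(q, embed(TOOLS[n])), reverse=True)[:k]

    # Only the k best-matching tools get surfaced to the LLM, not all of them.
    print(search_tools("query the database for last week's signups"))

The payoff is that the model's context only ever contains a handful of tool schemas, which is exactly the regime the linked paper suggests LLMs handle best.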
Hi! I'm one of the co-founders of DigitalOcean. Our plan is to open the Hatch program up to bootstrapped startups around the world.
We've launched Hatch in beta to pilot the program with a small group of startups over the next month. While in beta, we'll be working to refine the offering and the eligibility criteria for bootstrapped startups to apply.
For now, we've included a call to action on the Hatch landing page for bootstrapped startups interested in joining once we open the program up to a larger group.
This is great news. Thank you for being responsive to the bootstrap community :).
How long do you expect the beta to last? Also, I couldn't see a CTA for bootstrapped startups, but I'd rather just email whoever is running the program (or support) for consideration once you've opened it up.
I would love to hear why as well! Our quality and stability have significantly improved over the last year and as a co-founder of DO, I would like to understand what specific issues you faced. Feel free to shoot me an email directly (mitch@digitalocean.com) to take this conversation offline.
I created an account and entered the promo code. Will the credit in my account expire? I'm asking because I'll only have time to play with DO in a couple of months.