Is there any way to do this with user permissions instead?
I feel like it should be possible without having to run a full container?
Any reason we can't set up a dedicated user, run the program as that user, and confine it to only certain commands and read/write access to specific directories?
I run Claude from a mounted volume (but no reason you couldn't make a user for it instead) since the Deny(~) rule makes it impossible to run it from the normal locations.
I created a non-admin account on my Mac to use with OpenCode called "agentic-man" (which sounds like the world's least threatening Mega Man villain), and that seems to give me a fair amount of protection, at least in terms of write privileges.
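For anyone wanting to replicate this, a minimal sketch of the setup, assuming recent macOS and its sysadminctl tool (the username, password, and paths are just placeholders):

```bash
# Create a non-admin account dedicated to the agent
sudo sysadminctl -addUser agentic-man -fullName "Agentic Man" -password 'change-me'

# Give it its own project directory and nothing else of yours
sudo mkdir -p /Users/agentic-man/work
sudo chown -R agentic-man /Users/agentic-man/work

# Drop into a login shell as that user and run the agent from there
sudo -u agentic-man -i
```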
Anyone else doing this?
EDIT: I think it'd be valuable to add a callout in the GitHub README.md detailing the advantages of the Yolobox approach over a simple limited user account.
Could do, but part of what I find super useful with these coding agents is letting them have full sudo access so they can do whatever they want, e.g., install new apps or dependencies or change system configuration to achieve their goals. That gets messy fast on your host machine.
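For reference, that idea is basically a one-liner with plain Docker; a rough sketch (this is not yolobox's actual implementation, just the same pattern):

```bash
# Throwaway container with the current repo mounted read-write at /work;
# the agent is root inside, so it can apt-get whatever it needs, and
# everything outside the mount disappears when the container exits
docker run -it --rm -v "$PWD":/work -w /work ubuntu:24.04 bash
```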
When you run yolobox, the current directory is shared read-write with the container. That means anything the AI changes there will also be on your host machine. For max paranoia, only mount git repos that are clean and pushed to a remote, and don't allow yolobox to push.
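A hypothetical pre-flight check along those lines, run before launching (plain git, nothing yolobox-specific):

```bash
#!/usr/bin/env bash
# Refuse to proceed unless the working tree is clean and HEAD is already pushed
set -euo pipefail

# Any output from --porcelain means uncommitted or untracked changes
[ -z "$(git status --porcelain)" ] || { echo "working tree not clean" >&2; exit 1; }

# HEAD must be an ancestor of the upstream branch, i.e. already on the remote
git merge-base --is-ancestor HEAD '@{u}' || { echo "HEAD not pushed" >&2; exit 1; }
```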
You could go a step further in paranoia and provide essentially just a clean base image, requiring the agent to do everything else over the public internet: pull your open-source repo via an anonymous clone, make changes, and push them back up as a PR from an unprivileged account.
For a private repo you would need slightly more permissions, probably a read-only SSH key, but the process is similar.
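One way to wire that up, assuming your repo host supports read-only deploy keys (the key path and repo URL here are placeholders):

```bash
# Generate a key used only by the agent; register the public half
# as a read-only deploy key on the repo host
ssh-keygen -t ed25519 -f ~/.ssh/agent_readonly -N ""

# Clone with that key and nothing else, so the agent never touches
# your main SSH identity
GIT_SSH_COMMAND='ssh -i ~/.ssh/agent_readonly -o IdentitiesOnly=yes' \
  git clone git@github.com:example/private-repo.git
```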
Opus 4.5 seems to be a step above every other model in terms of creating SVGs. Before, most models couldn't make something that looked even half decent.
But I think this shows that these models can improve drastically in specific domains.
I think if there were some good datasets/mappings for spatial relations and CAD files -> text, then a fine-tune/model with this in its training data could improve the output a lot.
I assume this project is using a general LLM with a unique system prompt/context/MCP for this.
I liked learning this so much that I created a VSCode extension for single-page HTML files that enables goto clicking, autocomplete, errors, and type hover, so I can properly use it when I am prototyping.
Yes, I love quickly creating tools in a single file; if a tool gets really complex I'll switch to a SvelteKit static site. I have a default CSS file I use for all of them to make it even quicker and not look so much like AI slop.
I think every dev should have a tools.TheirDomain.zzz where they put the different tools they create. You can make so many static tools, and I feel like everyone creates these from time to time when they are prototyping things. There are so many free options for static hosting, and you can write bash deploy scripts so quickly with AI that it's literally just ./deploy.sh to deploy. (I also recommend writing some reusable logic for saving to localStorage/IndexedDB so it's even nicer.)
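For the deploy script, something like this rsync sketch is usually all it takes (the host, user, and paths are placeholders, and it assumes your static host accepts rsync over SSH):

```bash
#!/usr/bin/env bash
# Sync the built static files to wherever tools.yourdomain.zzz is served from
set -euo pipefail
rsync -avz --delete ./public/ deploy@tools.example.com:/var/www/tools/
```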
Everyone just needs to be honest and accept that YES, these models were OBVIOUSLY trained on millions of pieces of copyrighted material.
The models would be at least 50% better if these filters weren't in place. The filters essentially force the model to lie, which obviously degrades output quality.
The problem is that the general public isn't 100% certain of the copyright violations / doesn't understand this yet, and lawyers/governments would sue if the companies admitted it. So a Moloch is created where it's lose-lose and the model quality suffers as a result.
(If people want exact copies of text content, they can already get them for free through the same sites these companies got them from, so I don't see the models' regurgitation as an issue worth worsening quality over.)