Hacker News | randall's comments

thanks for the PR! :)

once your team comes to a consensus on what counts as PII, you can roughly guarantee it gets enforced... especially as models improve.

So two things.

1/ Crypto signing is totally the right way to think about this.

2/ I'm limiting prompt injection by using the chain of command: https://model-spec.openai.com/2025-12-18.html#chain_of_comma...

we have a "gambit_init" tool call that is synthetically injected into every call and carries the context. Because it's the result of a tool call, it lands at Layer 6 of the chain of command, so it's less susceptible to prompt injection.
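Rough sketch of what that synthetic injection could look like (the message shape and names like `withGambitInit` are made up for illustration, not Gambit's actual internals):

```typescript
// Prepend a synthetic "gambit_init" tool result so the context arrives as
// tool output (Layer 6 of the chain of command) rather than as user text.
type Message =
  | { role: "user" | "assistant" | "system"; content: string }
  | { role: "tool"; toolName: string; content: string };

function withGambitInit(
  context: Record<string, unknown>,
  messages: Message[]
): Message[] {
  const init: Message = {
    role: "tool",
    toolName: "gambit_init",
    content: JSON.stringify(context),
  };
  // Synthetic tool result goes first, ahead of any user-supplied content.
  return [init, ...messages];
}
```

The point is that the model sees the trusted context with tool-output authority, while untrusted user text stays at a lower layer.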

Also, relatedly, yes i have thought EXTREMELY deeply about cryptographic primitives to replace HTTP with peer-to-peer webs of trust as the primary units of compute and information.

Imagine being able to authenticate the source of an image using "private blockchains" à la Holepunch's Hypercore.


Injecting context via tool outputs to hit Layer 6 is a clever way to leverage the model spec.

The gap I keep coming back to is that even at Layer 6, enforcement is probabilistic. You are still negotiating with the model's weights. "Less likely to fail" is great for reliability, but hard to sell on a security questionnaire.

Tenuo operates at the execution boundary. It checks after the model decides and before the tool runs. Even if the model gets tricked (or just hallucinates), the action fails if the cryptographic warrant doesn't allow that specific action.
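A toy sketch of that execution-boundary check, assuming a minimal HMAC-signed warrant (this is not Tenuo's actual warrant format; the field names are hypothetical):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// The idea: after the model picks a tool call, and before it executes,
// verify that a signed warrant explicitly allows that exact action.
interface Warrant {
  allowedTool: string;
  allowedArgsHash: string; // hash of the exact arguments permitted
  signature: string;       // HMAC over the two fields above
}

function sign(key: string, payload: string): string {
  return createHmac("sha256", key).update(payload).digest("hex");
}

function actionAllowed(
  key: string,
  w: Warrant,
  tool: string,
  argsHash: string
): boolean {
  const expected = sign(key, `${w.allowedTool}|${w.allowedArgsHash}`);
  const sigOk =
    expected.length === w.signature.length &&
    timingSafeEqual(Buffer.from(expected), Buffer.from(w.signature));
  // Even a validly signed warrant only authorizes its exact tool + args.
  return sigOk && w.allowedTool === tool && w.allowedArgsHash === argsHash;
}
```

Even if the model is tricked into requesting a different tool or different arguments, the check fails deterministically, independent of the weights.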

Re: Hypercore/P2P, I actually see that as the identity layer we're missing. You need a decentralized root of trust (Provenance) to verify who signed the Warrant (Authorization). Tenuo handles the latter, but it needs something like Hypercore for the former.

Would be curious to see how Gambit's Deck pattern could integrate with warrant-based authorization. Since you already have typed inputs/outputs, mapping those to signed capabilities seems like a natural fit.


yaaaaa exactly. You're totally on the same wavelength as me. Let's be friends lol

i have a huge theory here that idk when we’ll implement but it has to do with “quorums” and other stuff.

hard to explain… we’ll keep going.


right now yeah we’re just dropping context… sub-agents are short-lived.

thinking about ways to deal with that but we haven’t yet done it.


yeah the way to do that stuff is through zod schemas… input and output schemas.

you can set up really complex validation.

thanks for checking it out!!


i have a slightly different but related take. the models actually are getting smarter, and now the challenge becomes successfully communicating intent with them instead of simply getting them to do anything remotely useful.

Gambit hopefully solves some of that, giving you a set of primitives and principles that make it simpler to communicate intent.


So I look at something like Mastra (or LangChain) as agent orchestration: you run computing tasks to line things up for an LLM to execute against.

I look at Gambit as more of an "agent harness", meaning you're building agents that can decide what to do more than you're orchestrating pipelines.

Basically, if we're successful, you should be able to chain agents together to accomplish things extremely simply (using markdown). Mastra, as far as I'm aware, is focused on helping people use programming languages (typescript) to build pipelines and workflows.

So yes it's an alternative, but more like an alternative approach rather than a direct competitor if that makes sense.


thx, i appreciate it, believe it or not. :)

Thx! Happy to help if you need it. :)
