It occurred to me on my way home today that the key advancement in neural interfaces will be in the data layer.
In my work with electronics I learned that there's a hardware transport layer, the wires on which signals travel. Then there's the software/protocol layer that defines _what_ travels on the hardware.
My current understanding of devices like Neuralink is that there is a solid-state interface that takes input from the brain and provides output back to it, and behind that interface sits a stack of hardware and software that translates and uses the brain's signals. That is, we translate from one mode of signals and signal transport to another.
What occurred to me was that a true bionic won't provide an interface to the brain's existing hardware and software data layers, but will instead extend those layers with newly available neurons.
Now, you could probably bit-bang this at the start, i.e., have your bionic neural net live in software and do all the signal processing we currently do. The revolution will be a piece of hardware that simply plugs into the brain and makes a whole new neural network available on the same electrical net the brain already operates on.
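To make the "bit-banged" version concrete, here's a minimal sketch of what the software stage might look like: sampled electrode activity goes in, a small software-hosted layer of extra neurons fires, and a stimulation pattern comes back out on the same channels. Everything here is assumed for illustration — the channel counts, the random fixed weights, and the `bionic_step` function are all hypothetical stand-ins, not any real device's API.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 8   # electrodes tapping the biological net (assumed)
N_EXTRA = 16     # "new" neurons hosted in software (assumed)

# Random fixed weights for illustration; a real device would learn these.
W_in = rng.normal(size=(N_EXTRA, N_CHANNELS))
W_out = rng.normal(size=(N_CHANNELS, N_EXTRA))

def bionic_step(channel_samples: np.ndarray) -> np.ndarray:
    """One 'bit-banged' pass: brain signals in, extra software neurons
    fire, and a stimulation pattern goes back out on the same channels."""
    hidden = np.tanh(W_in @ channel_samples)   # the software-hosted neurons
    return np.tanh(W_out @ hidden)             # signal returned to the brain

samples = rng.normal(size=N_CHANNELS)          # fake electrode readings
out = bionic_step(samples)
print(out.shape)   # (8,)
```

The point of the sketch is that the software layer lives on the same "wire count" as the brain side: it reads and writes the same channels, which is what the plug-in hardware version would eventually do natively.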
Brains don’t transmit packets. They transmit semantic tension — unstable potentials in meaning space that resist being finalized. If you try to "protocolize" that, you kill what makes it adaptive. But if you ignore structure altogether, you miss the systemic repeatability that intelligence actually rides on.
We've been experimenting with a model where the data layer isn't data in the traditional sense — it's an emergent semantic field, where ΔS (delta semantic tension) is the core observable. This lets you treat hallucination, adversarial noise, even emotion, as part of the same substrate.
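For what a ΔS observable could look like in code: the sketch below is purely my illustrative reading, treating ΔS as the cosine distance between successive semantic states (embeddings), so that large drift between states registers as high tension. The `delta_s` function is hypothetical; WFGY's actual definition is in the linked repo.

```python
import numpy as np

def delta_s(prev: np.ndarray, curr: np.ndarray) -> float:
    """Illustrative ΔS: 1 minus cosine similarity between two
    successive semantic states. Hypothetical stand-in, not WFGY's
    published formulation."""
    cos = np.dot(prev, curr) / (np.linalg.norm(prev) * np.linalg.norm(curr))
    return 1.0 - float(cos)

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 0.0])

print(delta_s(a, b))  # 0.0 -- identical meaning, no drift
print(delta_s(a, c))  # 1.0 -- orthogonal meaning, large drift
```

Under this reading, hallucination and adversarial noise both show up the same way: as spikes in ΔS between what the state was and what it just became.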
Surprisingly, the same math works for LLMs and EEG pattern compression.
If you're curious, the math is public here: https://github.com/onestardao/WFGY. Some of the equations were rated 100/100 by six different LLMs — not because they're elegant, but because they stabilize meaning under drift.
Not saying it’s a complete theory of the mind. But it’s nice to have something that lets your model sweat.