A structured language without ambiguity is not, in general, how people think or express themselves. In order for a model to be good at interfacing with humans, it needs to adapt to our quirks.
Convincing all of human history and psychology to reorganize itself in order to better service ai cannot possibly be a real solution.
Unfortunately, the solution is likely going to be further interconnectivity, so the model can just ask the car where it is, if it's on, how much fuel/battery remains, if it thinks it's dirty and needs to be washed, etc
>Convincing all of human history and psychology to reorganize itself in order to better service ai cannot possibly be a real solution.
I think there's a substantial subset of tech companies and honestly tech people who disagree. Not openly, but in the sense of 'the purpose of a system is what it does'.
I agree but it feels like a type-of-mind thing. Some people gravitate toward clean determinism but others toward chaotic and messy. The former requires meticulous linear thinking and the latter uses the brain’s Bayesian inference.
Writing code is very much “you get what you write” but AI is like “maintain a probabilistic mental model of the possible output”. My brain honestly prefers the latter (in general) but I feel a lot of engineers I’ve met seem to stray towards clean determinism.
Yep, humans have had a remedy for the problem of ambiguity in language for tens of thousands of years, or there never could have been an agricultural revolution giving birth to civilization in the first place.
Effective collaboration relies on iterating over clarifications until ambiguity is acceptably resolved.
Rather than spending orders of magnitude more effort moving forward with bad assumptions from insufficient communication and starting over from scratch every time you encounter the results of each misunderstanding.
Most AI models still seem deep into the wrong end of that spectrum.
I think such influence will be extremely minimal, like confined to dozens of new nouns and verbs, but no real change in grammar, etc.
Interactions between humans and computers in natural language for your average person is much much less then the interactions between that same person and their dog. Humans also speak in natural language to their dogs, they simplify their speech, use extreme intonation and emphasis, in a way we never do with each other. Yet, despite having been with dogs for 10,000+ years, it has not significantly affected our language (other then giving us new words).
EDIT: just found out HN annoyingly transforms U+202F (NARROW NO-BREAK SPACE), the ISO 80000-1 preferred way to type thousand separator
> I think such influence will be extremely minimal.
AI will accelerate “natural” change in language like anything else.
And as AI changes our environment (mentally, socially, and inevitably physically) we will change and change our language.
But what will be interesting is the rise of agent to agent communication via human languages. As that kind of communication shows up in training sets, there will be a powerful eigenvector of change we can’t predict. Other than that it’s the path of efficient communication for them, and we are likely to pick up on those changes as from any other source of change.
Seems very unlikely. My parent said the effects have already started (but provided no evidence), so I assume you mean by less then a generation. I am not a linguist but I would like to see evidence of such rapid shifts ever occurring in anywhere the history of languages before I believe either of you.
I have a feeling you only have a feeling, but not any credible mechanism in which such language shifts can occur.
I am a little confused. Every year language changes. Young people, tech, adapting ideologies, words and concepts adopted from other languages, the list of language catalysts is long and pervasive.
> Convincing all of human history and psychology to reorganize itself in order to better service ai cannot possibly be a real solution.
I'm on the spectrum and I definitely prefer structured interaction with various computer systems to messy human interaction :) There are people not on the spectrum who are able to understand my way of thinking (and vice versa) and we get along perfectly well.
Every human has their own quirks and the capacity to learn how to interact with others. AI is just another entity that stresses this capacity.
Speak for yourself. I feel comfortable expressing myself in code or pseudo code and it’s my preferred way to prompt an LLM or write my .md files. And it works very effectively.
> Unfortunately, the solution is likely going to be further interconnectivity, so the model can just ask the car where it is, if it's on, how much fuel/battery remains, if it thinks it's dirty and needs to be washed, etc
I wish I understood how that works. Retail investors are so small, compared with hedge funds and whatnot, that "average people" cannot move a stock price significantly. So, when Trump tweets about a company, how does the stock move? Who is actually doing all that selling to drive the price down?
And, since the price almost always recovers within a week... does it even matter?
Trump has big money friends that control non-retail investment. The tweet is just signalling.
That kind of access and "control" is why they think they can just tweet at Coke to stop using artificial dyes instead of, you know, changing the rules at the organization they run.
Classic false dilemma. You're trying to frame my comment as “we can only ever fix one problem” when it is, in fact, “we have constrained resources and urgent systemic failures and so prioritisation is important”.
For example, Budget 2026 did not address the €307 million structural shortfall in university funding. Is basic income for artists a better allocation than third-level education? Or capital expenditure on cancer care? Or NAS opex?
I specifically disagree with this allocation of funds as we live in country filled with specific solvable structural and life-limiting problems that should be solved before artist wellbeing.
That's beside the point? Gaining security by losing freedom was always on the table. What's interesting is the cultural shift toward not caring about losing freedom.
I think it is the point: there is a balance between freedom and safety.
For example, it is illegal to carry a loaded handgun onto a plane. Most people would agree that is an acceptable trade of freedom for safety.
There are places with even less safety and more “freedom” than the US so people who take an absolutist view towards freedom also need to justify why the freedoms that the US does not grant are not valuable.
> I think it is the point: there is a balance between freedom and safety.
Sometimes. But freedom and security are not always opposed.
It’s possible to trade freedom for security but it’s also possible that freedom creates security. Both can be true at the same time. Surveillance, not security, is what opposes freedom. Surveillance simply trades one form of insecurity for another at the cost of freedom.
> For example, it is illegal to carry a loaded handgun onto a plane. Most people would agree that is an acceptable trade of freedom for safety.
A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.
2A seems to make the case that the freedom to bear arms creates security. Given how history played out it’s hard to argue against. I’m not arguing we should be able to take guns on planes but 2A is an example of freedom creating security.
How about just general privacy? I mean do you really want someone / the government to be able to track everywhere you go?
- Going to your girlfriends place while the wife is at work
- Visiting a naughty shop
- Going into various companies for interviews while employed
With mass surveillance there is the risk of mass data leak. Would you be comfortable with a camera following you around at all times when you're in public? I wouldn't be.
It's a crime to leave the state to get an abortion. They can prosecute when you return home.
There have been vigilante patrols in West Texas, watching the necessary routes out of the state. The law gives any resident the grounds to turn in their neighbor for planning to get an abortion.
The right to privacy, to not let the government have a master record of everywhere you've ever been and everything you've ever said just in case they decide to someday revoke free speech and due process, or decide it doesn't apply. Lately we have plenty of examples of how quickly that can happen.
The Stasi were "tough on crime" too, back when that was expensive. How quickly we forget. Well, you're welcome to find a panopticon to live it, but excuse other for not finding it a good tradeoff.
We're looking for a change in slope based on overarching economic policy as a means of comparing two political parties. A global disaster is not a policy.
So find the chart and look at the trend, and add annotations for administration start and end dates with consideration for lag, and then annotate the chart to indicate the partisan balance of Congress at that time.
When you cut revenue and increase expenses and the real problem here is disease in the world
I hope so. We're right on the cusp of having computers that actually are everything we ever wanted them to be, ever since scifi started describing devices that could do things for us. There's just a few pesky details left to iron out (who pays for it, insane power demand, opaque models, non-existent security, etc etc).
Things actually can "do what I mean, not what I say", now. Truly fascinating to see develop.
Ah yes. “Non-existent security” is only a pesky detail that will surely be ironed out.
It’s not a critical flaw in the entirety of the LLM ecosystem that now the computers themselves can be tricked into doing things by asking in just the right way. Anything in the context might be a prompt injection attack, and there isn’t really any reliable solution to that but let’s hook everything up to it, and also give it the tools to do anything and everything.
There is still a long way to go to securing these. Apple is, I think wisely, staying out of this arena until it’s solved, or at least less of a complete mess.
Yes, there are some flaws. The first airplanes also had some flaws, and crashed more often than they didn't. That doesn't change how incredible it is, while it's improving.
Maybe, just maybe, this thing that was, until recently, just research papers, is not actually a finished product right now? Incredibly hot take, I know.
I think the airplane analogy is apt because commercial air travel basically capped out at "good enough" in terms of performance (just below Mach 1) a long time ago and focused on cost. Everyone assumes AI is going to keep getting better, but what if we're nearing the performance ceiling of LLMs and the rest is just cost optimization?
Principally speaking, as much energy as satellite receives from solar panels it needs to send away - and often a lot of it is in the form of heat. So, the question is, how much energy is received in the first place. We currently have some quarter of megawatt of solar panels of ISS, so in principal - in principal - we know how to do this kind of scale per satellite. In practice we perhaps will have more smaller satellites which together aggregate the compute to the necessary lever and power to the corresponding level.
> We currently have some quarter of megawatt of solar panels of ISS
It's average outbut is like half of that though. So something the size of the space station, a massive thing which is largely solar panels and radiators, can do like 120kW sustained. Like 1-2 racks of GPUs, assuming you used the entire power budget on GPUs.
And we're going to build and launch millions of these.
Convincing all of human history and psychology to reorganize itself in order to better service ai cannot possibly be a real solution.
Unfortunately, the solution is likely going to be further interconnectivity, so the model can just ask the car where it is, if it's on, how much fuel/battery remains, if it thinks it's dirty and needs to be washed, etc
reply