dpflan's comments | Hacker News

“””

Much of my career has been spent in teams at companies with products that are undergoing the transition from "hip app built by scrappy team" to "profitable, reliable software" and it is painful. Going from something where you have 5 people who know all the ins and outs and can fix serious bugs or ship features in a few days to something that has easy clean boundaries to scale to 100 engineers of a wide range of familiarities with the tech, the problem domain, skill levels, and opinions is just really hard. I am not convinced yet that AI will solve the problem, and I am also unsure it doesn't risk making it worse (at least in the short term)

“””

This perspective is crucial. Scale is the great equalizer / demoralizer: scale of the org and scale of the systems. Systems become complex quickly, and verifying their correctness and function becomes harder. For companies built from day one with AI, with AI influencing them as they scale, where does complexity begin to run up against the limitations of AI and cause regression? Or, if all goes well, amplification?


The more verifiable the domain, the better suited it is. We see similar reports of benefits in advanced mathematics research from Terence Tao; granted, some reports seem to amount to cases where very few people knew that data relevant to the proof existed, but the LLM had it in its training corpus. Still, verifiably correct domains are well-suited.

So the concept of formal verification is as relevant as ever, and when building interconnected programs, complexity rises and verifiability becomes more difficult.


> The more verifiable the domain the better suited.

Absolutely. It's also worth noting that in the case of Tao's work, the LLM was producing Lean and Python code.
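A toy illustration of what "verifiable" means here (not from Tao's actual project): a Lean statement is machine-checked by the kernel, so an LLM's output either proves the claim or fails to compile.

    -- Lean 4: the kernel checks this mechanically; a wrong proof simply does not compile.
    theorem add_comm_example (a b : Nat) : a + b = b + a := Nat.add_comm a b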


I think the solution in harder-to-verify cases is to give AI (sub-)agents a really good set of instructions: detailed guidelines for what they should do and for how they should think, explore, and break down problems. Potentially tens of thousands of words of instructions, to get the LLM to act as a competent employee in the field. The models then need to be good enough at instruction-following to actually explore the problem in the right way and apply basic intelligence to solve it. Basically, treat the LLM as a competent general knowledge worker that is unfamiliar with the specific field, and give it detailed instructions on how to succeed in that field.

For the easy-to-verify fields like coding, you can train "domain intuitions" directly into the LLM (and some of this training should generalize to other knowledge-work abilities), but for other fields you would need to supply them in the prompt, since those abilities cannot be trained into the LLM directly. This will need better models but might become doable in a few generations.
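A rough sketch of what supplying the field in the prompt could look like, assuming an OpenAI-compatible client; the playbook file and model name are placeholders:

    # Hypothetical sketch: load a long domain "playbook" and use it as the
    # system prompt for a sub-agent via an OpenAI-compatible chat API.
    from openai import OpenAI

    client = OpenAI()
    playbook = open("underwriting_playbook.md").read()  # tens of thousands of words

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": playbook},
            {"role": "user", "content": "Assess this claim and explain your reasoning."},
        ],
    )
    print(response.choices[0].message.content)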


> I think the solution in harder-to-verify cases is to provide AI (sub-)agents a really good set of instructions on a detailed set of guidelines of what it should do and in what ways it should think and explore and break down problems

Using LLMs to validate LLMs isn't a solution to this problem. If the system can't self-verify, then there's no signal to tell the LLM that it's wrong. The LLM is fundamentally unreliable; that's why you need a self-verifying system to guide and constrain the token generation.
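For contrast, in the coding case the verification signal can come from outside the model entirely, e.g. from executing the tests. A minimal sketch, with illustrative names rather than any particular product's approach:

    # Accept an LLM-generated change only if the project's own test suite passes;
    # the signal comes from execution, not from another LLM.
    import subprocess

    def accept_patch(repo_dir: str) -> bool:
        result = subprocess.run(["pytest", "-q"], cwd=repo_dir,
                                capture_output=True, text=True)
        return result.returncode == 0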


Do you mind adding more color and details to your closing thought? I’m curious whether you know of projects that exist to help with this.

I found this to be an interesting analysis:

“””

What has changed is where the durable value actually lives. It is increasingly useful to separate the stack into a few layers:

- The computing, IO, and compiler kernel libraries based on CUDA, compiler frameworks like MLIR or JAX’s XLA, and of course Apache Arrow.

- The database systems and caching layers, ideally connected with ADBC’s zero-serialization connectivity.

- The language bindings and orchestration layers that expose those capabilities.

- The application or agent interfaces that sit on top.

When viewed this way, most of the long term value clearly resides in the first two layers (compute and data access), not the last two.

“””
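To make the "data access" layer concrete, here is a minimal sketch of ADBC handing back Arrow data directly, assuming the adbc-driver-sqlite and pyarrow packages (the query itself is just an illustration):

    # Query a database over ADBC and get an Apache Arrow table back,
    # avoiding row-by-row serialization on the way out.
    import adbc_driver_sqlite.dbapi

    with adbc_driver_sqlite.dbapi.connect() as conn:  # defaults to an in-memory DB
        cur = conn.cursor()
        cur.execute("SELECT 1 AS id, 'arrow' AS name")
        table = cur.fetch_arrow_table()  # a pyarrow.Table
        print(table)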


"""

Claude Code with Opus 4.5 is a watershed moment, moving software creation from an artisanal, craftsman activity to a true industrial process.

It’s the Gutenberg press. The sewing machine. The photo camera.

"""

- Sergey Karayev

> https://x.com/sergeykarayev/status/2007899893483045321


Just pointing out here that "rue" means "to regret", emphatically. Perhaps it is not the best name for a programming language.


That’s part of the reason for the name! “Rust” has negative interpretations as well. A “rue” is also a kind of flower, and a “rust” is a kind of fungus.


Fair enough! I do like how others are framing this as "write less code" -- if Rue makes one think more and more about the code that finally makes it to production, that can be a real win.


Sounds fitting to me. Every line of code I wrote for something that ultimately didn't need code to begin with is basically codified regret checked into git.


The best code is the code not written, so perhaps it is the best name for a programming language?


With "servant leadership" in its current form being attributed to Greenleaf, here is the "source of truth" on servant leadership: https://greenleaf.org/what-is-servant-leadership/

"Growth" of those being led is a key concept it seems, which I would think is really only possible when the leader doesn't do everything by themselves as a die-hard servant, but utilizes the "leadership" part to help subordinates learn to lead themselves.

Granted, this realm of ideas can be a gray area, but it seems like servant leadership as presented by the author here does not incorporate the concept of growing those they lead -- as indicated by the fact that they have self-invented a new "buzzword" which actually seems to involve the behaviors laid out by servant leadership -- am I missing something?


Essentially, what information are they privy to that the public is not? What asymmetry exists (timing, non-public information)? Is there any way for the public to be nearly as informed? What are they trading on? Upcoming funding changes (more money here -> buy, less money there -> sell)? The COVID impact stands out.


Most "big" projects (huge chip foundries, etc.) require various forms of government approval (if not outright funding). They get asymmetry from knowing:

1. Sometimes, that the project is happening before everyone else.

2. Whether the project will or will not be approved or stopped, e.g. in committee.

3. Various other classified things, like Dept of Defense briefings (if the Army says it needs XYZ component and plans to buy 10 billion worth of them, then buy the company that makes XYZ component).


It isn't necessarily that they have information the public doesn't (although it could be that: they would know about policy changes before the general public does). It could also be that they use their leadership position to push forward policies that benefit stocks they own.


This could even happen for non-selfish reasons. If you genuinely believe that, e.g., "The future of America is BigTech," you'll both favor tech stocks in your portfolio and be more sympathetic to their lobbyists and favored bills.


There are three things that are always overlooked in this conversation.

- It’s not just representatives but their staff, family and friends. See the Covid scandals as an example.

- Often the information they’re privy to comes from their networks outside of government. The current transparency laws give us insight into their family members’ investments, which is incredibly beneficial public knowledge.

- The current laws have no teeth and are not enforced well. Immediate disclosure should be required & automated for elected representatives, judges, family & staff within 24 hours. Late reporting should mean immediate sale and forfeiture of any profits.


If you compared Pelosi to other investors in San Francisco, you wouldn’t see much of a difference. Anyone who has gone heavy into tech in the last couple of decades has outperformed the market, at considerable risk of going broke if there had been a tech bust. Compare Facebook or Google to an index fund, especially before index funds decided to go heavy into FAANGs.

People who make/have more money also have more appetite for risk and in general make higher returns. Even without insider trading, being able to take a lopsided position on tech, knowing that if it loses you still have a comfortable life, is how the rich become richer.


That doesn't explain why persons in leadership positions outperform other members of congress. Presumably they all talk to each other and could share trading strategies. There's no reason not to unless your strategy involves inside information that might get you in trouble if spread around.


It actually does, if you believe that the people in leadership positions have been earning money for longer and have more experience in investing. You could also easily argue that they are more successful in general than congresspeople of similar tenure who aren't in leadership positions.

You are basically comparing the CEO to middle management, and then what do you expect? You need to do a more balanced comparison than that to show an actual discrepancy. Or maybe get Congress to dole out leadership positions at random and then compare?


I’m not defending congressional trading, but there are potentially other confounding variables (emphasis on potential). Leaders may tend to be older, have more appetite for risk, or leadership may correlate with wealth/status because “the connected” can also raise more money etc etc. Unless those types of variables are controlled for, it should temper how strongly we draw conclusions.


Where is AI actually selling and doing well? What's a good resource for these numbers? What are the smaller scale use-cases where AI is selling well?

I am generally curious, because advances in LLMs, VLMs, and generative AI are proving useful, but societal impact at the desired scale and rate is not revealing itself.


Coding - e.g. Claude Code and Cursor have both announced $1B revenue run rates.


That would be meaningful if they weren’t losing money to generate that revenue.


The product works and saves enough human effort to justify the real cost. People will eventually pay when it comes down to it.


If that were the case, why not charge more?


Because they're loss-leading like all their competitors for now


I am running a container on an old 7700K with a 1080 Ti that gives me VS Code completions with RAG, with similar latency and enough accuracy to be useful for boilerplate etc…

That is something I would possibly pay for, but since the failures on complex tasks are so expensive, this seems to be the major use case, and it will just be a commodity.

Creating the scaffolding for a JWT token or other similar tasks will be a race to the bottom IMHO, although valuable and tractable.

IMHO they are going to have to find ways to build a moat, and what these tools are really bad at is the problem domains that make your code valuable.

Basically, anything that can be vibe coded can be trivially duplicated, and the big companies will just kill off the small guys who are required to pay the bills.

Something like surveillance capitalism will need to be found to generate the revenue needed for the scale of Microsoft etc…


Given how every CPU vendor seems to be pushing for some kind of NPU, locally run models will probably be far more common in the next 5 years. And convincing everyone to pay a subscription for very minimal improvements in functionality is gonna be hard.


The NPUs integrated into CPU SoCs are very small compared to even integrated GPUs, much less discrete or datacenter GPUs.

NPUs seem to be targeted towards running tiny ML models at very low power, not running large AI models.


Have you documented your VSCode setup somewhere? I've been looking to implement something like that. Does your setup provide next edit suggestions too?


I keep idly wondering what the market would be for a plug-and-play LLM runner. Some toaster-sized box with the capability to run exclusively offline/local. Plug it into your network, give your primary machine the IP, and away you go.

Of course, the market segment that would be most interested probably has the expertise and funds to set up something with better horsepower than could be offered in a one-size-fits-all solution.



Ooof, right idea but $4k is definitely more than I would be comfortable paying for a dedicated appliance.

Still, glad to see someone is making the product.


I am working on a larger project around containers and isolation that is stronger than current conventions but short of Kata etc…

But if you follow the Podman instructions for CUDA, the llama.cpp project shows you how to use their plugin here:

https://github.com/ggml-org/llama.vscode
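For anyone wanting to poke at a setup like this without the editor plugin: once a llama.cpp server is running locally (the port and parameters below are assumptions, not part of the plugin's docs), you can hit its OpenAI-compatible endpoint directly:

    # Ask a local llama.cpp server (assumed to be listening on localhost:8080)
    # for a plain completion via its OpenAI-compatible API.
    import requests

    resp = requests.post(
        "http://localhost:8080/v1/completions",
        json={"prompt": "def fizzbuzz(n):", "max_tokens": 64, "temperature": 0.2},
        timeout=30,
    )
    print(resp.json()["choices"][0]["text"])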


Market size for this is in the billions though, not trillions.


It's easily a $200bn ARR business; if coding agents achieved another step jump in abilities, ~$1trn+ market cap.


> if coding agents achieved another step jump in abilities, ~$1trn+ market cap

Do you want to walk us through that math?


Agreed, coding is one. What else?


Professional legal services seem to be picking up steam. Which sort of makes sense as a natural follow-on to programming, given that 'the law' is basically codified natural language.


I don't know how it is in other countries, but in the UK using LLMs for any form of paid legal services is hugely forbidden, and would also be insanely embarrassing. Like, 'turns out nobody had any qualifications and they were sending all the work to mechanical Turks in third world countries, who they refused to pay' levels of embarrassing.

I say this as someone who once had the bright idea of sending deadline reminders, complete with full names of cases, to my smart watch. It worked great and made me much more organised until my managers had to have a little chat about data protection and confidentiality and 'sorry, what the hell were you thinking?'. I am no stranger to embarrassing attempts to jump the technological gun, or the wonders of automation in time saving.

But absolutely nobody in any professional legal context in the UK, that I can imagine, would use LLMs with any more gusto and pride than an industrial pack of diarrhoea relief pills or something - if you ever saw it in an office, you'd just hope it was for personal use and still feel a bit funny about shaking their hands.


Except that it keeps getting lawyers into trouble when they use it.

https://www.reuters.com/legal/government/judge-disqualifies-...


Yeah, good point. These things never get better.


Newsrooms, translation services.


sales, marketing, customer support, oh my, so many


I don't use it, but I know several people who use ChatGPT to edit emails etc. so they don't come across as nasty. How well it works, I can't say.


Most of my family uses ChatGPT instead of Google to answer questions, despite my warnings that it’ll just make stuff up. I definitely Google much less now than I used to, directing a fair amount of that into ChatGPT instead.


But how much are you paying for these services?


My family? Same as they pay for Google


That's frankly mostly because Google search got so massively worse... I'd still use Google more, if not for the fact that the stuff I asked it 5 years ago and got answers for no longer returns useful answers.


You can check on trustmrr.com (mostly indie/solo businesses) that a large chunk of those smaller companies make money by selling AI video generation and other genAI services.


Is this being used internally at Google? What's the "dog-fooding" situation and is it leading to productivity enhancements?

