Hacker News | ipaddr's comments

Not too many people have all three subscriptions. Most people have no subscriptions and will continue to use free ad tools with ads.

Claude will be the first company to fall as developers find something slightly better.


"pace at which the models are getting smart is accelerating". The pace is decelerating.

My impression is that solar (and maybe wind?) energy have benefited from learning-by-doing [1][2] that has resulted in lower costs and/or improved performance each year. It seems reasonable to me that a similar process will apply to AI (at least in the long run). The rate of learning could be seen as a "pace" of improvement. I'm curious, do you have a reference for the deceleration of pace that you refer to?

[1] https://emp.lbl.gov/news/new-study-refocuses-learning-curve

[2] https://ourworldindata.org/grapher/solar-pv-prices-vs-cumula...


Why would the curve of solar prices be in any way correlated with the curve of AI improvements?

The deceleration of pace is visible to anyone capable of using Google.


u/ipaddr is probably referring to

  1) the dearth of new (novel) training data. Hence the mad scramble to hoover up, buy, steal, any potentially plausible new sources.

  2) diminishing returns of embiggening compute clusters for training LLMs and size of their foundation models.
(As you know) You're referring to Wright's Law, aka the experience/learning curve.

So there's a tension.

Some concerns that we're nearing the ceiling for training.

While the cost of applications using foundation models (implementing inference engines) is decreasing.

Someone smarter than me will have to provide the slopes of the (misc) learning curves.


I was not aware of (or had forgotten) the term "Wright's law" [1], but that indeed is what I was thinking of. It looks like some may use the term "learning curve" to refer to the same idea (efficiency gains that follow investment); the Wikipedia page on "Learning curve" [2] includes references to Wright.

[1] https://en.wikipedia.org/wiki/Experience_curve_effect

[2] https://en.wikipedia.org/wiki/Learning_curve
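Wright's law can be stated concretely: unit cost falls by a fixed fraction (the "learning rate") each time cumulative production doubles. A minimal sketch, where the 20% learning rate is an assumed, roughly solar-PV-like figure for illustration only:

```python
import math

def wrights_law_cost(initial_cost, cumulative_units, learning_rate=0.20):
    """Cost of the next unit after `cumulative_units` have been produced.

    Wright's law: cost(x) = cost(1) * x^(-b), where each doubling of
    cumulative output x multiplies cost by (1 - learning_rate).
    """
    b = -math.log2(1 - learning_rate)  # experience exponent
    return initial_cost * cumulative_units ** -b

# Each doubling of cumulative output cuts cost by the learning rate:
c1 = wrights_law_cost(100.0, 1)  # 100.0
c2 = wrights_law_cost(100.0, 2)  # 80.0 (one doubling: -20%)
c4 = wrights_law_cost(100.0, 4)  # 64.0 (two doublings)
```

Whether AI training or inference follows any such curve, and with what exponent, is exactly the open question in this thread.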


> It seems reasonable to me that a similar process will apply to AI

If it's reasonable, then reason it out, because it's a highly apples-to-oranges comparison you're making.


I don't think anyone really knows, because there's no objective standard for determining progress.

Lots of benchmarks exist where everyone agrees that higher scores are better, but there's no sense in which going from a score of 400 to 500 is the same progress as going from 600 to 700, or less, or more. They only really have directional validity.

I mean, the scores might correspond to real-world productivity rates in some specific domain, but that just begs the question -- productivity rates on a specific task are not intelligence.
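The "directional validity" point can be made concrete: any strictly increasing rescaling of a benchmark preserves the ranking of models while changing the apparent size of each gain. A hypothetical sketch (the scores and rescaling function are invented for illustration):

```python
# Four hypothetical models with raw benchmark scores.
scores = {"model_a": 400, "model_b": 500, "model_c": 600, "model_d": 700}

def rescaled(score):
    # Any strictly increasing function is an equally valid "benchmark".
    return score ** 2 / 1000

ranking_raw = sorted(scores, key=scores.get)
ranking_new = sorted(scores, key=lambda m: rescaled(scores[m]))
assert ranking_raw == ranking_new  # ordering is unchanged

gain_low = rescaled(500) - rescaled(400)   # 90.0
gain_high = rescaled(700) - rescaled(600)  # 130.0
# Two identical 100-point raw gains now differ in size,
# so score differences carry no fixed meaning; only direction does.
```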


Or the email address you have already hacked into. Why bother with the username at that point?

Those are not great examples.

Bubble gum puts the buyer in a worse dental health situation.

Windows is a monopoly that controls the desktop market and the buyer would have been better off with a richer market with a variety of options.

You could have said cigarettes. They provide the same pleasure benefits as gum with unhealthy outcomes for the buyer.


You are making the mistake of asserting that the buyer exactly shares your values.

With Windows, I didn't assert that it was the value-maximizing case, merely that it benefits both parties. Which it does, most likely to the advantage of the buyers.


By the same logic, an inmate in prison is benefiting from incarceration because they are receiving food and housing. Obviously, the costs of being imprisoned are greater than any "enrichment" from it, and this is exactly the case with Windows and almost all technology.

I would say my statement about relatively free markets addresses that.

Businesses that expand their licensing agreements with Microsoft aren't doing it because they are being coerced.


"Free" market coercion is probably the only reason companies like Microsoft are still in business.

LLMs give false information often. Your ability to catch incorrect facts is limited by your knowledge, your ability, and your desire to do independent research.

"LLMs are accurate about everything you don't know but factually incorrect about things you are an expert in" is a common comment for a reason.


As I used LLMs more and more for fact type queries, my realization is that while they give false information sometimes, individual humans also give false information sometimes, even purported subject matter experts. It just turns out that you don’t actually need perfectly true information most of the time to get through life.

No they don’t give false information often.

They do. To the point where I'm getting absolutely furious at work at the number of times shit's gotten fucked up and when I ask about how it went wrong the response starts with "ChatGPT said"

Do you double-check every fact, or are you relying on being an expert on the topics you ask an LLM about? If you are an expert on a topic you probably aren't asking an LLM anyhow.

It reminds me of someone who reads a newspaper article about a topic they know and says it's mostly incorrect, but then reads the rest of the paper and accepts those articles as fact.


Gell-Mann Amnesia

"Often" is relative but they do give false information. Perhaps of greater concern is their confirmation bias.

That being said, I do agree with your general point. These tools are useful for exploring topics and answers, we just need to stay realistic about the current accuracy and bias (eager to agree).


I just asked chatGPT.

"do llms give wrong information often?"

"Yes. Large language models produce incorrect information at a non-trivial rate, and the rate is highly task-dependent."

But wait, it could be lying and they actually don't give false information often! But if that were the case, this answer itself would be false information given at a non-trivial rate, because I don't ask it that much stuff.


I have seen them make up stuff constantly for smaller Rust libraries that are newish or don't get a lot of use.

Lawsuits against medical professionals are difficult, and in many cases impossible, for the average person to win. They are held less accountable compared to other professions.

> They are held less accountable compared to other professions.

I have no idea what other professions you're talking about. Doctors are the only professionals where it's common for multi-million dollar judgements to be awarded against individuals. In many cases, judgements larger than their malpractice insurance limits.

Take a doctor working alone overnight in the ER. They are responsible for every single thing that happens. One of the 4 NPs that they are supposed to have time to supervise while they are stuck sedating a kid for ortho to work on makes a mistake—the doctor is the one that's getting sued. A nurse misinterprets an order and gives too much of something, the doctor is getting sued. Doesn't matter if it's their fault or not. Literally every single one of the dozens of patients that comes in with a runny nose or a tummy ache or a headache is their responsibility and could cost them their house. And there are far too many patients for them to actually supervise fully. They have to trust and delegate, but in practice they are still 100% on the hook for mistakes. For accepting this responsibility they might get $10 per NP patient that they supervise.

Healthcare professionals also occasionally face criminal prosecution for mistakes at a level that wouldn't even end a career in other professions.

> Lawsuits against medical professionals are difficult in many cases impossible for the average person to win

Malpractice attorneys operate on contingency, so they’re more accessible to the average person than most kinds of attorneys. It’s one of the many reasons healthcare is so expensive in the US.

It’s harder for a doctor to get fired for showing up late to work than it is for a cook at McDonald’s, I guess, but compared to other professionals? I’ve seen software engineers regularly skip through companies leaving disasters in their wake for their entire careers. MBAs regularly destroy companies, lawyers and finance bros get away with murder, and police officers literally get away with murder.

The only profession that faces anywhere near the accountability that doctors do that I can think of might be civil engineers.


Input your roadmap into an LLM of your choosing and see if you can create that code.

I can, but I switched to something more challenging. I handed everything over to him and told him I was no longer interested. I don't want him to feel that I cheated him by creating something he worked on.

If we just take this idea in good faith, one could make the point that social media and books are more similar than they appear. They both end up in escapism. They both can teach or entertain. They both are mostly anti-social.

The difference in form increases effectiveness but in the end they are a tool that is designed to escape reality.


>... in the end they are a tool that is designed to escape reality.

Non-fiction books would strongly beg to differ.


Interesting ideas. The bridge from what he built to what you suggest is a long one. The free Outlook plugin sounds like a good onboarding vector, but who downloads Outlook plugins? Not the type of person in a medium/large company who is using 360 and/or has IT manage software like Outlook.

You are on step 49. Being able to sell hostile image data requires volume. The free plugin is local, so is the plan to sell corporate data?

Investing in a software patent without the ability to enforce is a waste of money.

I think he might be better off making this into a WordPress plugin before going to Outlook.


I got a headset and wanted to make VR apps, but found the medium too addicting, and now I just play with no desire to create.

