
In the early 2000s Wikipedia used to fill that role. Now it's like you have an encyclopedia that you can talk to.

What I'm slightly worried about is that eventually they are going to want to monetize LLMs more and more, and it's not going to be good, because they have the ability to steer the conversation towards trying to get you to buy stuff.



> they are going to want to monetize LLMs more and more

Not only can you run reasonably intelligent models on recent, relatively powerful PCs "for free", but advances are undoubtedly coming that will increase the efficient use of memory and CPU in these things - this is all still early days

Also, some of those models are "uncensored"
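
For the curious, here's a minimal sketch of what "running locally" can look like today, assuming llama-cpp-python is installed and you've downloaded a quantized GGUF model; the model path and size here are placeholders, not recommendations:

    # Minimal local-inference sketch (assumes: pip install llama-cpp-python
    # and a quantized GGUF model file downloaded separately).
    from llama_cpp import Llama

    # Placeholder path; pick a model that fits your RAM/VRAM.
    llm = Llama(model_path="./models/some-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

    resp = llm.create_chat_completion(
        messages=[{"role": "user", "content": "What is a GGUF quantized model?"}],
        max_tokens=256,
    )
    print(resp["choices"][0]["message"]["content"])

Nothing leaves your machine, which is the whole point of the "for free" and "uncensored" angle above.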


Can you? I imagine e.g. Google is using material not available to the public to train their models (uncensored Google Books, etc.). Also, the chat bots, like Gemini, are not just pure LLMs anymore; they also utilize other tools as part of their computation. I've asked Gemini computationally heavy questions and it successfully invokes Python scripts to answer them. I imagine it can also use tools other than Python, some of which might not even be publicly known.

I'm not sure what the situation is currently, but I can easily see private data and private resources leading to much better AI tools, which cannot be matched by open source solutions.


While they will always have premiere models that only run on data center hardware at first, the good news about the tooling is that tool calls are computationally very minimal and no problem to sandbox/run locally - at least in theory; we would still need to do the plumbing for it (a rough sketch of that plumbing is below).

So I agree that open source solutions will likely lag behind, but that's fine. Gemini 2.5 wasn't unusable when Gemini 3 didn't exist, etc.
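
To make the "plumbing" concrete, here's a hedged sketch of a local tool-call loop; the `run_model` function is a stand-in for whatever local inference you use, and the JSON tool-request shape is an assumption for illustration, not any particular vendor's API:

    import json
    import subprocess

    # Hypothetical: run_model() wraps your local inference (llama.cpp, Ollama,
    # etc.) and returns either plain text or a JSON tool request like
    # {"tool": "python", "code": "print(2**64)"}.
    def run_model(messages):
        raise NotImplementedError("plug in your local model here")

    def run_tool(request):
        # Executing the tool is cheap; the real work is sandboxing it properly.
        if request["tool"] == "python":
            out = subprocess.run(
                ["python3", "-c", request["code"]],
                capture_output=True, text=True, timeout=10,
            )
            return out.stdout + out.stderr
        return f"unknown tool: {request['tool']}"

    def chat(messages):
        while True:
            reply = run_model(messages)
            messages.append({"role": "assistant", "content": reply})
            try:
                request = json.loads(reply)
            except (json.JSONDecodeError, TypeError):
                return reply  # plain answer, no tool needed
            messages.append({"role": "tool", "content": run_tool(request)})

The loop itself costs almost nothing; all the compute stays in `run_model`, which is why tool use isn't what keeps open models behind the frontier ones.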


Yes, because local models can run Internet search tools. Even with the big boys like OpenAI etc. I prefer the result quality when it's made a search - and they seem to have realised this too; the majority of my queries now kick off searches.


How do you verify the models you download also aren't trying to get you to buy stuff?


I guess you... ask them for a bunch of recommendations? I would imagine this would not be incredibly hard to test as a community.


Before November 30, 2022 that would have worked, but I think it stopped being reliable sometime between the original ChatGPT and today.

As per dead internet theory, how confident are we that the community which tells us which LLM is safe or unsafe is itself made of real people, and not mostly astroturfing by the owners of LLMs which are biased to promote things for money?

Even DIY testing isn't necessarily enough, deceptive alignment has been shown to be possible as a proof-of-concept for research purposes, and one example of this is date-based: show "good" behaviour before some date, perform some other behaviour after that date.


Proudly brought to you by Slurm


Which free models do you recommend?


One of the approaches to this that I haven't seen being talked about on HN at all is LLMs as public infrastructure run by the government. I think the EU can pull this off. This also addresses the overall alignment and compute-poverty issues. I wouldn't mind if my taxes paid for that instead of a ChatGPT subscription.


This is not a good idea at all.

Government should not be in a position to directly and pervasively shape people’s understanding of the world.

That would be the infinite regress opposite of a free (in a completely different sense) press.

A non-profit providing an open data and training regime for an open WikiBrain would be nice. With standard pricing for scaled up use.


> Government should not be in a position to directly and pervasively shape people’s understanding of the world.

You disagree with national curricula, state broadcasters, publicly funded research and public information campaigns?


Many Americans these days absolutely do disagree with all of those things. Educated ones. There's simply a short-circuit, belief-based pathway in people's brains that bypasses everything rational on arbitrary topics.

Most of us used to see it as isolated to religion or niche political points, but increasingly everything is being swept into the "it's political" camp.


Given “national curricula” of a dominant democratic country are undergoing a politically motivated change, starting with significant web materials, and moving into education …

Do you prefer the previous narratives? The latter? Or whatever you are told?

And that is the risk of relatively static information.

What if your information source was interactive, adaptive and motivated? And could record and report on your interactions and viewpoints?


I've heard it said that Americans distrust their government and trust their corporations, while Europeans distrust their corporations and trust their government. I honestly think that governments already have a huge role in shaping people's understanding of the world, and that's GREAT in good democratic countries.

What I find really weird is that I am losing belief in the whole idea of a free press, considering how the mainstream media is being bought by oligarchs around the globe. I think this is a good example of the erosion of trust in institutions in general. This won't end well.

Your idea of letting it be run by a non-profit makes me believe that you also don't trust institutions anymore.


I can’t say I have no trust for any institutions.

But my trust depends on each institution's choices, just as my trust in people varies based on their records.

Mostly, I trust everyone to be who they show themselves to be. Which can lean one way or the other, or be very mixed across different dimensions.

But, yes, governments and corporations which are steadily centralizing power are inherently untrustworthy, as they are, at best, making us all vulnerable and less powerful as individuals. Meaning they are decreasing our freedom, not increasing it.


Instead, we should let capitalism consolidate all power in the hands of the few, and let them directly and pervasively shape people's understanding of the world.

How would a non-profit even be funded? That would just be government with extra steps.

No, capitalism giveth the LLMs and capitalism taketh the sales.


Were you responding to someone else?

For answers, just re-read my comment. Or, this:

1) Avoiding centralization is exactly why government shouldn’t do this.

2) Why did you raise a false dichotomy of government vs. commercial centralization?

I proposed an open solution, which is non-commercial with decentralized control.

3) Funding?: Have you heard of Wikipedia?

People often donate to prominent tools they use.

And, as I pointed out, there is an even more reliable source.

The necessity of scaled, automated access creates an inevitable need for uniform, openly set pricing for that scaled-up use.

A nice case where non-profit open access is naturally self-funded.


That's because a government-run LLM would be like government-run media.

High inflation? No, the government LLM will tell us we're doing great.


this assumes that "the government" is "us" and not "them"...


I sort of already had an experience where it did, kinda. I was consulting it on potential fashion choices to upgrade my work uniform, to look professional but still creative - basically to look more like a creative director. It recommended brands, colors, styles, etc. Then I asked about eyeglass frames: I showed it three pictures and described my facial features, and it was like "you have to buy this one now", more enthusiastic than expected. It wasn't ads or anything, but there was a bit of salesiness in there.


or more generally than just ads: make you believe stuff that makes you act in ways that are detrimental to you, but beneficial to them (whoever sits in the center and can control and shape the LLM).

i.e. the Nudge Unit on steroids...

care must be taken to avoid that.


I can envision this as being in the boots of Truman from The Truman Show, where some advertisement is thrown in your face randomly.


It's also inevitable that better and better open source models will be distilled as frontier models advance.


I agree. I think the local models you can run on the "average computer" are not quite good enough yet, but I have hope that we will see much better small local models in the future.


Right, this is what happened with search engines. And "SEO for LLMs" is already a thing.


Enshittification is always inevitable in a capitalist world, but it's not always easy to predict how it will happen.



