
Hell no. There's so much friction in setting up OpenClaw before you can utilise it efficiently, and then there are the security concerns. I'd in no way want my daily driver doing something with my data that I didn't want it to do.

Agreed. There's only so much a probabilistic model can do. Everything it produces is an effect of the copious amounts of data it's fed; none of it is intelligence it generated on its own, and somehow the majority of the world doesn't seem to understand that. Generative AI is NOT the future.

I can imagine Musk selling these very models with "AI" slapped onto them and calling it revolutionary

I can envision being in Truman's shoes from The Truman Show, where advertisements get thrown in your face at random

True, it was always greed veiled in an illusion of passion

People's trust in LLMs imo stems from a lack of awareness that AI hallucinates. Hallucination benchmarks are often hidden, or rushed past in marketing videos.


I think it's better to say that LLMs only hallucinate. All the text they produce is entirely unverified. Humans are the ones reading the text and constructing meaning.


[flagged]


To quote Luke Skywalker: Amazing. Every word of what you just said is wrong.


Which is why I keep saying that anthropomorphizing LLMs gives you good high-order intuitions about them, and should not be discouraged.

Consider: GP would've been much more correct if they'd said "It's just a person on a chip." Still wrong, but qualitatively much less wrong than they are now.


No, it does not; it just adds to the risk that you'll be fooled by them, or by the corporations that produce them and surveil you through their SaaS models.

It's a person in the same sense that a Markov chain is one, or the bot at the reception desk in Starship Titanic, i.e. not at all.
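For anyone who hasn't played with one, here's a toy word-level Markov chain (a minimal sketch over a made-up corpus) showing how fluent-ish text falls out of pure next-word statistics, with nobody home:

    import random
    from collections import defaultdict

    def train(text):
        # Map each word to the list of words observed to follow it.
        chain = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def babble(chain, word, n=10):
        out = [word]
        for _ in range(n):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))  # sample the next word
        return " ".join(out)

    corpus = "the cat sat on the mat and the cat ran off with the mat"
    print(babble(train(corpus), "the"))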


Just a weird little guy.



Similar analogy, yes.

FWIW, I prefer my "little people on a chip" because it's a deliberate riff on SoC, a.k.a. System on a Chip, an actual component you reach for when designing computer systems. The implication being: when you design information-processing systems, the box labelled "LLM" should go where you'd consider putting a box labelled "Person", not where you'd put "Database" or any other software/hardware box.
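A minimal sketch of what that implies in code, with hypothetical names throughout: treat the LLM box's output the way you'd treat a person's free-form input, i.e. parse and validate it before anything downstream trusts it.

    import json

    def ask_llm(prompt):
        # Hypothetical stand-in for a real model call; returns free-form text.
        return '{"amount": 42.0, "currency": "USD"}'

    def parse_invoice(raw):
        # Like input from a person: parse defensively and validate every field,
        # instead of trusting it the way you'd trust a database read.
        data = json.loads(raw)  # may raise ValueError; handle upstream
        if not isinstance(data.get("amount"), (int, float)) or data["amount"] < 0:
            raise ValueError("invalid amount from model")
        if data.get("currency") not in {"USD", "EUR"}:
            raise ValueError("unexpected currency from model")
        return data

    print(parse_invoice(ask_llm("Extract the invoice total as JSON")))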


No, it is not. It's a funny way of compressing and querying data, nothing else.


It is probabilistic, unlike a database, which is not. It is also a lossy way to compress data. We could go on about the differences, but those two things alone make it not a database.

Edit: unless we are talking about MongoDB. It will only keep your data if you are lucky and might lose it. :)
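To make those two differences concrete, a toy contrast (made-up numbers):

    import random

    # A database: the same query returns the same answer, every time.
    kv = {"capital of France": "Paris"}
    print(kv["capital of France"])  # -> Paris, deterministically

    # An LLM-ish answer: a sample from a learned distribution.
    dist = {"Paris": 0.95, "Lyon": 0.04, "Marseille": 0.01}
    print(random.choices(list(dist), weights=list(dist.values()))[0])
    # -> usually Paris, occasionally not; and the distribution itself is a
    #    lossy summary of the training data, not the data.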


No, it is still just a database. It is a way to store and query information, it is nothing else.

It's not just the weirdness in Mongo that can exhibit non-deterministic behaviour; some common indexing techniques don't guarantee order and/or exhaustiveness.

Let it go: LLMs and related compression techniques aren't very special, and neither are chatbots or copy-paste-oriented software development. Optimising them for speed or manipulation does not change this, at least not from a technical perspective.


> It's just a database. There is no difference in a technical sense between "hallucination" and whatever else you imagine.

It's like a JPEG. Except instead of lossy compression on images giving you a pixel soup that only vaguely resembles the original when you're resource-bound (and even modern SOTA LLMs are), you get output that looks more or less correct but just isn't.


It would be like JPEG if opening a JPEG file involved pushing in a seed to get an image out. It's like a database: it just sits there until you enter a query.

This impression comes from not having deep understanding of a specific area; if you ask it about an area you know well, you'll see.


I get what you're saying but I think it's wrong (I also think it's wrong when people say "well, people used to complain about calculators...").

An LLM chatbot is not like querying a database. Postgres doesn't have a human-like interface. Querying with SQL is highly technical; when you get nonsensical results out of it (which is more often than not), you immediately suspect the JOIN you wrote, or whatever. There's no "confident vibe" in results spat out by the DB engine.
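A made-up sqlite3 example of that reflex: the engine does exactly what you asked, the result is quietly wrong, and you blame your own JOIN, not the database.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE users  (id INTEGER, name TEXT);
        CREATE TABLE orders (user_id INTEGER, total REAL);
        INSERT INTO users  VALUES (1, 'ada'), (2, 'bob');
        INSERT INTO orders VALUES (1, 9.99);  -- bob has no orders
    """)

    # The inner JOIN silently drops bob; nothing about the output sounds
    # confident or person-like, so you go re-read your query.
    for row in db.execute("SELECT name, total FROM users JOIN orders ON id = user_id"):
        print(row)  # only ('ada', 9.99)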

Interacting with a chatbot is highly non-technical. To many people, the chatbot seems like a highly competent, person-like robot that knows everything, and knows it with a high degree of confidence too.

So it makes sense to talk about "hallucinations", even though it's a flawed analogy.

I think the mistake people make when interacting with LLMs is similar to what they do when they read/watch the news: "well, they said so on the news, so it must be true."


No, it does not. It's like saying 'I talk to angels' because you hear voices in the humming from the ventilation.

It's precisely like a database. You might think the query interface is special, but that's all it is, and if you let it fool you, fine, go ahead, just keep it public that you do.


lol, the chances of the same person using that kind of phrase and an em dash are vanishingly low


Well, in my opinion there's nothing wrong with vibe-coding. You're free to use it for your passion projects. I draw the line when people try to sell a vibe-coded project as something huge, putting users at risk of potential security breaches while also taking money from them.

Every other day I see ads from companies saying "use our AI and become a millionaire". This kind of marketing from agentic IDEs implies there's no need for developers who know their craft, which, as said above, isn't the case.


Totally agree. I have my day job, and vibe-coding has simply brought back the joy of building things for me. It should be about passion and creativity, not about scamming people or overselling half-baked products. The "get rich quick with AI" narrative is toxic.


Why is this getting downvoted? Genuinely curious.


Because the user is using an LLM to generate these comments; there are three so far in this thread.


Fair, but the threat model matters here. For a static mortgage calculator, the data-leak risk is zero (if it's client-side). The risk here is different: it's a logic risk. If the AI botches the formula and someone makes a financial decision based on it, that's the problem. For "serious" projects, vibe coding must stop where testing and code audits begin.
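For the record, the formula in question is small enough that this is exactly where testing can start. A minimal sketch (standard amortized-loan math, not OP's actual code):

    def monthly_payment(principal, annual_rate, years):
        # M = P * r * (1 + r)^n / ((1 + r)^n - 1), with monthly rate r over n months.
        r = annual_rate / 12
        n = years * 12
        growth = (1 + r) ** n
        return principal * r * growth / (growth - 1)

    # Sanity check against a widely published figure:
    # $300k at 6% over 30 years is about $1798.65/month.
    assert abs(monthly_payment(300_000, 0.06, 30) - 1798.65) < 0.05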


Yep, including that too obviously, but I don't think OP is trying to market this, just sharing his passion project


Why do you think vibe code isn’t good enough for real products? Just so long as you have tests that show it functions as expected, why does it matter?


>> Just so long as you have tests that show ...

This by definition filters out all non-devs, and even many junior devs, as you need to understand deeply whether those tests are correct and cover all the important edge cases, etc. (see the sketch below for a classic one).

Plus, when you deploy it, you need to know it was deployed properly and that your DB creds aren't on the frontend.

But mostly no one cares, as there are no consequences for leaking your users' personal data or whatnot.
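Concretely, take the mortgage formula from upthread (a hypothetical continuation of that sketch): a happy-path test passes, while a 0% promotional rate crashes it, and spotting that gap is exactly the judgment being described.

    def monthly_payment(principal, annual_rate, years):
        r = annual_rate / 12
        n = years * 12
        if r == 0:
            return principal / n  # without this guard, r == 0 divides by zero below
        growth = (1 + r) ** n
        return principal * r * growth / (growth - 1)

    assert abs(monthly_payment(300_000, 0.06, 30) - 1798.65) < 0.05       # happy path
    assert abs(monthly_payment(300_000, 0.0, 30) - 300_000 / 360) < 1e-9  # 0% edge case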


I think once you are asking for people’s money you should know what is going on pretty thoroughly. But that’s just my two cents :)


> I think once you are asking for people’s money you should know what is going on pretty thoroughly.

If that’s the bar, there are likely a ton of businesses that should shut down…


I think vibe coding isn't quite good enough for real products, and I say that as someone who usually has 4 AI agents going non-stop. I do read the code (I read so, so much code), and I give the AI plenty of feedback.

If you just want to build a little web app, or a couple of screens for your phone, you'll probably be fine. (Unless there's money or personal data involved.) It's empowering! Have fun.

But if you're trying to build something that has a whole bunch of moving parts and which isn't allowed to be a trash fire? Someone needs to be paying attention.


Well, the dialogue there involves two or more people; when commenting, why would you use that? Even if you have collaborators, you very likely wouldn't be discussing stuff through code comments.


It seemed outrageous at the start, especially in 2016, but after the AI boom we are surely heading towards it. People have stopped being genuine.

