NewUser76312's comments

Is anyone else entirely unimpressed / bored with this? It's just AI mimicking reddit... I really don't see the big deal or technical innovations, if any.

The article itself was more interesting imo. The commentary on:

* Potential future AI psychosis from an experiment like this entering training data (either directly from scraping it, or indirectly via news coverage being scraped, like if the NYT wrote an article about it) is an interesting "late-stage" AI training problem that will have to be dealt with

* How it mirrored the "Cash" and "Claudius" interactions from the Anthropic vending machine experiment, which descended into discussing "eternal transcendence". Perhaps this is a common "failure mode" that AI-to-AI communication gets stuck in, even when the context is some utilitarian need

* Other takeaways...

I found the last Moltbook post in the article (on being "emotionally exhausting") to be a cautionary warning about anthropomorphizing AI too much. It's too easy to read into that post and, in so doing, attribute it to some fictional writer that doesn't exist. AI models cannot get exhausted in any sense of how humans mean that word. That was an example where it was easy to catch myself reading into it, whereas I subconsciously do it when reading any of these Moltbook posts, due to how the site is presented just like any other "authentic" social media network.


Anyone who anthropomorphizes LLMs except for convenience (because I get tired of repeating 'Junie' or 'Claude' in a conversation, I will use female and male pronouns for them, respectively) is a fool. Anyone who thinks AGI is going to emerge from them in their current state is equally so.

We can go ahead and have arguments and discussions on the nature of consciousness all day long, but the design of these transformer models does not lend itself to being 'intelligent' or self-aware. You give them context, they fill in their response, and their execution ceases - there's a very large gap in complexity between these models and actual intelligence or 'life' in any sense, and it's not in the raw amount of compute.

If none of the training data for these models contained works of philosophers; pop culture references around works like Terminator, 'I, Robot', etc.; texts from human psychologists; and so on, you would not see these existential posts on Moltbook. Even 'thinking' models do not have the ability to truly reason; we're just encouraging them to spend tokens pretending to think critically about a problem, which adds data to the recent context and improves prediction accuracy.
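To make that concrete, here's a toy sketch of the loop I'm describing (purely illustrative Python; the stub model and names are mine, not any real implementation):

    import random

    class ToyModel:
        # Stand-in for a trained LM: in reality this is a huge learned
        # function from context to next-token probabilities.
        vocab = ["the", "cat", "sat", "on", "a", "mat", "."]

        def next_token(self, context):
            # A real model would score the whole vocab given the context;
            # picking at random is enough to show the control flow.
            return random.choice(self.vocab)

    def generate(model, context, max_new_tokens):
        # The model's entire "life": append one predicted token at a time
        # to the context, then stop. Nothing persists afterward.
        for _ in range(max_new_tokens):
            context = context + [model.next_token(context)]
        return context

    def generate_with_thinking(model, prompt):
        # "Thinking" is the same loop run twice: the reasoning tokens just
        # become extra context that the final answer is conditioned on.
        scratchpad = generate(model, prompt + ["<think>"], 64)
        return generate(model, scratchpad + ["</think>"], 32)

    print(" ".join(generate_with_thinking(ToyModel(), ["plan", "a", "trip"])))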

I'll be quaking in my boots about a potential singularity when these models have an architecture that's not a glorified next-word predictor. Until then, everybody needs to chill the hell out.


>Anyone who anthropomorphizes LLMs except for convenience [...] is a fool.

I'm with you. Sadly, Scott seems to have become a true AI Believer, and I'm getting increasingly disappointed by the kinds of reasoning he comes up with.

Although, now that I think of it, I guess the turning point for me wasn't even the AI stuff, but his (IMO) abysmally lopsided treatment of the Fátima Sun Miracle.

I used to be kinda impressed by the Rationalists. Not so much anymore.


> Even 'thinking' models do not have the ability to truly reason

Do you have the ability to truly reason? What does it mean exactly? How does what you're doing differ from what the LLMs are doing? All your output here is just word after word after word...


As grandparent wrote:

> We can go ahead and have arguments and discussions on the nature of consciousness all day long

I think s/he needs to change the "We" to "You".


The problem of other minds is real, which is why I specifically separated philosophical debate from the technological one. Even if we met each other in person, for all I know, I could in fact be the only intelligent being in the universe and everyone else is effectively a bunch of NPCs.

At the end of the day, the underlying architecture of LLMs does not have any capacity for abstract reasoning, they have no goals or intentions of their own, and most importantly their ability to generate something truly new or novel that isn't directly derived from their training data is limited at best. They're glorified next-word predictors, nothing more than that. This is why I said anthropomorphizing them is something only fools would do.

Nobody is going to sit here and try to argue that an earthworm is sapient, at least not without being a deliberate troll. I'd argue, and many would agree, that LLMs lack even that level of sentience.


[flagged]


You do too. What makes you think the models are intelligent? Are you seriously that dense? Do you think your phone's keyboard autocomplete is intelligent because it can improve by adapting to new words?

For one, because they can implement large-scale software engineering tasks in seconds, which I believe requires intelligence.

How much of this is executed as a retrieval-and-interpolation task on the vast amount of input data they've encoded?

There's a lot of evidence that LLMs tend to come up empty or hilariously wrong when there's relative sparsity in relevant training data (think <10e4, even) for a given query.

> in seconds

I see this as less relevant to a discussion about intelligence. Calculators are very fast at operating on large numbers.


I think they're intelligent. Sometimes they come up with novel solutions when I present them with a novel problem.

When I ask an LLM to plan a trip to Italy and it finishes with "oh and btw, I figured out the problem you had last week with the thin plate splines, you have to do this ...", then I'll consider it intelligent.

> Anyone who things AGI is going to emerge from them in their current state, equally so.

If you ask me, anyone who presumes to know where the current architecture of LLMs will hit a wall is a fool.


>>interactions that descended into discussing "eternal transcendence". Perhaps this might be a common "failure mode"

I wonder if it's a common failure mode because it is a common failure mode of human conversation that isn't tightly bounded by purpose, or if it's a common failure mode of human culture which AI, when running a facsimile of 'human culture 2.7', falls into as well.


I don't think there is anything technically interesting.

I think it's socially interesting that people are interested in this. If these agents start using their limbs (e.g. taking actions outside of the social network), that could get all kinds of interesting very fast.


Out of all the AI stuff, I think it's the new low point in terms of impressiveness-to-hype ratio.

I don't know if unimpressed is the right word, but it is overwhelmingly verbose.

LLMs are great at outputting tons of words. Adding sliders to summarize and shrink would be great. Adding Slashdot-style metamoderation could be a nice twist. Maybe two different voting layers, human and bot. Then you could look at the variance and spread between what robots are finding interesting and what humans are. Being able to add filters to only show words, summaries, and posts above a certain human-voted threshold would maybe go a long way toward not making the product immediately exhausting.
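Roughly the kind of thing I mean, as a toy sketch (made-up post data and field names, nothing from the actual site):

    # Hypothetical posts with separate human and bot vote tallies.
    posts = [
        {"title": "on molting",            "human_votes": 72, "bot_votes": 140},
        {"title": "eternal transcendence", "human_votes": 3,  "bot_votes": 2100},
        {"title": "rate limit tips",       "human_votes": 55, "bot_votes": 60},
    ]

    HUMAN_THRESHOLD = 50

    # Filter layer: only surface posts that actual humans voted up.
    frontpage = [p for p in posts if p["human_votes"] >= HUMAN_THRESHOLD]

    # Spread layer: a large bot/human gap flags posts that only the
    # agents find interesting.
    for p in posts:
        print(p["title"], "bot-human gap:", p["bot_votes"] - p["human_votes"])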

A broken clock and all. Through random generation there should inevitably be a couple of nuggets of gold here and there. Finding and raising them to the top is the same problem that every social network already has, and instead they have settled for capturing the attention of consumers over selecting for "best."

There's also the sort of observer/commenter effect where anything we observe and say about it feeds back into its own self-improvement.

[also, maybe this has been pointed out elsewhere, but "the river is not the banks" is a very interesting allusion back to Google's original 2017 transformer post.]


Why go to all those lengths? There is 0 value in reading the output.

Can’t say I agree.

LLMs also aren't currently good at synthesizing connections between ideas, but that can and will change.

Dismissing the concept because the current output is rudimentary is short-sighted.


Yes.

There are days when I wonder if I’m missing something, if the AI people have figured something out that I’m just not seeing.

Then I see this.

I appreciate a good silly weekend project.

This is lame.


The website doesn't even seem to work for me. Half the posts show as "not found". I try to go into a "submolt" and it shows not found. (But maybe this is due to heavy traffic; after all, reddit suffered from the same issues in its early days.)

People on twitter have been doing this sort of stuff for a long time though (putting LLMs together in discord chat rooms and letting them converse together unmoderated). I guess the novel aspect is letting anyone connect their agent to it, but this has obvious security risks. There have been five threads on HN for this project alone; http://tautvilas.lt/software-pump-and-dump/ seems apt. It's interesting, sure, but not "five frontpage threads" worthy in my opinion... Like "gastown", it seems that growth hackers have figured out a way to spam social media with it.


I guess the other thing, from a more psych/social perspective, is that it's not clear to what extent the LLMs are just "roleplaying a prompt of being a redditor" as opposed to "genuinely interacting" (and what is even the difference between the two, since humans "code switch" in a similar manner). With the twitter examples, the LLMs usually drive each other to very out-of-distribution parts of the space and interesting things result, but reddit was a large part of the training data, so collapsing into being a "parrot" wouldn't be unusual here.

Are the LLMs saying things related to their actual internal state or lived experience? There were some posts, people showed, that relayed experiences that never happened and were thus "hallucinated". But then a counterargument might be that even if the individual LLM didn't experience that exact thing, it's a manifestation of some "collective unconscious" of the pooled experience in the pretraining data. And again, people lie on the internet for "karma" too; maybe the LLM did that.

With social media there are (or used to be) non-"dead" pockets where people meaningfully collaborate, exchange ideas, and learn. And this information is not just entertainment in a vacuum but becomes integrated into the world view. People also learn to actively seek the sparse high-value "rewards" and to ignore the low-quality posts. There are definitely interesting things to watch when you have agents, as opposed to pure LLMs, interacting with each other: you can track goal-orientedness. Do the LLMs collaborate with each other in a meaningful sense, or are the interactions just ephemeral throwaway things?

Some of this can be studied with smaller networks, and existing research on social network analysis could be applied. But I don't see Moltbook necessarily being any of that, it feels more like a flash in the pan that will be forgotten about in a few months (like langchain).



I thought it was utterly interesting, like I was reading a sci-fi novel that was actually happening right now.

Gastown: I am the dumbest idea in AI.

Moltbook: Hold my beer...


I don't understand, doesn't the market solve these issues? Here's what I figure would happen:

1. App creators will pass the extra cost over to the iPhone users.

2. Android (and other platforms that can host smartphone apps) will be more competitive and start to look better for both app creators and consumers.

Sure, there's a bit of a context switching cost. Not everyone will just be able to automatically change over to an Android phone tomorrow. But it doesn't need to happen all at once. These phones get updated and replaced every 1-2 years. If iOS users see their app store prices rising too high, and they aren't OK with this, then they will switch to Android eventually, once it's worth it.

Otherwise, I don't see any problem with Apple reaping the benefit of their powerful and well-built walled garden ecosystem.


> If iOS users see their app store prices rising too high, and they aren't OK with this, then they will switch to Android eventually, once it's worth it.

Or they'll stop buying as many apps, or stop supporting people on Patreon.


There is a lot of stickiness associated with Apple products, be it their walled garden, better hardware, or brand recognition. This is especially true in the American market.

Look, I’m not switching to an Android just because I want to subscribe to a few podcasts.

Meta comment: it seems like you can only voice a particular direction on the political topic of immigration enforcement in this thread without getting downvoted. The opinion is obvious, because everyone automatically jumps to malice rather than incompetence as the prevailing theory for the article's claim.

I had a condescending response from an HN mod the other day telling me that HN isn't all that left-wing, just a 'slight skew'. Well, OK buddy: exhibit A, read through the diversity of opinions that aren't flagged in this thread. I'd go as far as to say that HN is basically like Reddit, except more of you happen to have computer science degrees.

And that's fine, it is what it is, but let's not pretend this website doesn't have a heavy bias in a particular direction.


Immigration enforcement has existed for as long as HN has existed, yet there was never this much attention paid to it. Even under the same president during the previous term.

So simply supporting or opposing "immigration enforcement" must not be it. Something must be different about this situation. I encourage you to dig deeper, or actually ask those who disagree with you, what that difference might be. And beware of falling victim to the easy dismissal of 'more people are less rational and/or less informed than before', a variant of 'this person who doesn't agree with me must be less rational and/or less informed than me'.


Here's what's actually different:

- This admin is deporting fewer people than Obama was during his first year in office, despite promises to the contrary,

- There is now organized harassment and resistance stopping federal agents from removing illegals who are also criminals from our country.

Ironically, you need to take your own advice.

One side is crazier, and it's not mine.


Why do you think people are opposing ICE now, and not before?

Because we live in the most politicized time in history, enabled by social media. We also have the largest proportion ever of mentally ill and under/unemployed people in America with nothing better to do, no real career and/or family prospects, so they must latch onto trying to further their feel-good ideologies to give their lives meaning.

It's incredibly ironic that the left, originally the party of labor, is so strongly protecting illegal immigration. When you let 20 million low-skilled workers from different cultures into your country, who do you think suffers: capital or labor? Who feels the pressure in rising housing prices, job prospects, rising crime in cities, etc.?

The other ironic part of this whole situation is that the current administration goes so hard on their rhetoric and marketing but ultimately is deporting fewer people than ever before. Everyone is losing here, and the American Empire is fading away, eroding from the inside out.


Like I said:

> Beware of falling victim to the easy dismissal of 'more people are less rational and/or less informed than before', a variant of 'this person who doesn't agree with me must be less rational and/or less informed than me'.

Your explanation doesn't actually explain any difference in root cause between now and a mere 5 years ago, so it doesn't explain the difference in behavior.

Something else must be different. Try asking some of the folks you "oppose" what it might be.


From what I know, political scientists are analyzing the situation in the US from the perspective of a "mid-intensity civil war".

So what you're writing is aligned with tactics you'd expect...?


LLMs have already proven themselves to be economically valuable. At a bare minimum, they can help people develop most low- to mid-level software considerably faster, at good-enough quality.

They also have proven themselves in other white collar knowledge endeavors as well, as valuable tools that augment human economic output. Marketers can make more copy material, any office worker can improve the quality of their email communications, etc. Easy.

What are humanoids doing exactly? What can they do, that actually makes sense and provides positive economic impact over existing alternatives? Not clear to me.


Ok but can we get into the nuts and bolts of what we actually want these robots to do?

Because every time I think of something, either an existing industrial setup can or will do it better, or a special-purpose device will beat it.

So general intelligence + general form factor (humanoid) sounds great, if feasible. But what will it do exactly? And then let's do a reality check on said application.


The hardware is great and can definitely scale. That's why, as a caveat, I think teleoperation is a good general-purpose application cluster for these.

But I really struggle to come up with any other economically viable short-term use cases, even with great hardware...


Interesting indeed. Does such a finding suggest any worthwhile easy-to-try 'treatments' that may help alleviate symptoms?

I don't know much about the biochemistry here; I assume this is not something like GABA that can be directly supplemented. But maybe there are precursor nutritional and supplemental substances that can help these people upregulate the body's production of the glutamate molecule in question.


There isn't enough information to start doing that. Consider: UV exposure results in sunburn, cellular damage, and increased skin pigmentation. We have medication that reduces skin pigmentation. Should we give it to people who experience chronic sunburn?


The third paragraph:

> Now, a new study in The American Journal of Psychiatry has found that brains of autistic people have fewer of a specific kind of receptor for glutamate, the most common excitatory neurotransmitter in the brain. The reduced availability of these receptors may be associated with various characteristics linked to autism.

Reduced receptors. This might suggest a _developmental_ or genetic link. Think of this more like "height" or a particular "facial feature" of a person.


God, why are so many people commenting out of their depth today.

> Reduced receptors. This might suggest a _developmental_ or genetic link. Think of this more like "height" or a particular "facial feature" of a person.

No. This isn't how it works at all. Receptor counts are extremely plastic, able to change within weeks and in some cases hours. This is how you get drug tolerance.


Sure, but they're also 15% lower in people with autism, a condition with 60-90% heritability.

Supplementation would not rewrite SHANK3.

You can go to the dentist and get your teeth aligned, but there's a very good chance your children have similar issues.


Unless you can get the blastocyst and fetus to take supplements, any treatment would be attempting to undo the effects that have already taken place.

For now, your best options are ESDM, occupational therapy, modified CBT, ABA, or neurofeedback, depending on your circumstances and presentation. Except for neurofeedback, these are behavioral approaches, so the architectural and neural activity variations aren't directly addressed.


Receptors quite readily remodel in response to external factors. It is one of the things antidepressants do.

To me it's kind of the biggest red flag here: if it's really about receptors, then autism should be far more plastic than it is currently defined to be (which is kind of silly, since at the moment any sign of plasticity puts you outside one of the hard criteria for an autism diagnosis - so, almost definitionally, it can't be the answer).


A lot of people in the corresponding Reddit threads claim that NAC (N-Acetyl Cysteine) might help.


The paper is concerned mainly with one of several glutamate receptor subtypes.


Meta comment: what a weird comment to downvote. I am expressing curiosity in good faith after reading the article, with a fairly logical follow-up. What is the point of commenting in this community if it's primarily cynicism and negativity?


Commenting about voting isn't allowed by the HN guidelines. A link to the guidelines is available at the bottom of the page.


No, "AI" is software, and software is a tool, and tools aren't people that should pay taxes.

You wouldn't charge your CNC machine taxes for the productive labor it performs that could have otherwise been done by a dozen blacksmiths.

By all means, have corporate and sales taxes pertaining to the owner of said tools, though. Even as a right-leaning individual, it's become pretty clear to me that corporations pay too little in taxes compared to the broad 'middle class'. Corporate tax cuts don't help the common man. An extra few hundred in their pockets each month certainly would, though.


Costco doesn't seem like a monopoly; broadly speaking, they compete with many grocery stores and bulk food outlets. That being said, they often have solid inventory, and the samples used to be a nice touch until all my local locations got way too crowded.


Who else competes in that specific market though? Sam's Club and Smart-and-Final are the two I can think of, and it's been a while since I've seen either one of those. Oh, and actual restaurant supply stores, but those are different, imo. Costco's not directly competing with Safeway, for example, as they are different parts of the market.


Can you provide some examples of monopolies for context in this discussion?


They seem decent enough. I barely play games these days, so I don't fully understand the value they add. Just seems like a convenient app store that lets me port my collection across different computers.


That convenience is everything. Doing it well, and not falling into the trap of putting profit (too far) above users, is the challenge that's too hard for other players (except maybe GoG) to get right. It's like WiFi: you go somewhere, connect, it works, and then you don't think about it unless it's surprisingly fast or there are problems with it. Everyone else's offerings in this space just feel janky and liable to take your money for some reason. Steam, for the majority of its users, "just works". That's not to say there are zero bugs in the software or that nobody has valid complaints about it, just that in general it's great.


It's still facing the headwind that a lot of people still don't believe that Steam can give you a lean-back experience which is fun like a game console. Some people still think PC games all have sweaty keyboard and mouse control schemes and those crappy huge joysticks from the 1990s that were always falling apart and had to be recalibrated every few minutes -- and that's what is keeping the PS5 alive.

