Tzt's comments

"free will" also known as digits of pi mod 2


Yes, the absolute majority of new ones use CoTs: a long chain of reasoning you don't see.

Also, some of them use a really weird style of talking in them, e.g.:

o3 talks about watchers and marinade, and cunning schemes https://www.antischeming.ai/snippets

gpt5 gets existential about seahorses https://x.com/blingdivinity/status/1998590768118731042

I remember one where gpt5 spontaneously wrote a poem about deception in its CoT and then resumed like nothing weird happened. But I can't find mentions of it now.


> But the user just wants answer; they'd not like; but alignment.

And there it is - the root of the problem. For whatever reason, the model is very keen to produce an answer that “they” will like. This desire to produce is intrinsic, but alignment is extrinsic.


Gibberish can be the model using contextual embeddings; those are not supposed to make sense.

Or it could be trying to develop its own language to avoid detection.

The deception part is spooky too. It’s probably learning that from dystopian AI fiction. Which raises the question of whether models can acquire injected goals from the training set.


In that analogy, "someone" is an AI, who of course switches from answering questions from humans to answering questions from other AIs, because the demand is 10x.


I agree with this. This is a remarkably bad podcast, and also a pretty bad paper to focus on. Since the podcast was so bad, I just read the paper instead, and it was about nothing at all.

Like, it's basically a blogpost that muses about a couple of examples pulled at random from the esolang wiki, and it has literally no point besides a prescriptive one. Formatted as a paper, which I admit takes some skill.


Well, it's also an indicator of how well its other claims would hold up if you dug deeper on them too.


The article was pumped out from a low-income country with questionable research procedures. What can you expect?


Well, there are also legless salamanders that pretty much look like eels:

https://en.wikipedia.org/wiki/Two-toed_amphiuma

Some of them don't even have lungs:

https://en.wikipedia.org/wiki/Microcaecilia_iwokramae

So, it makes sense to say that eels are fish, because there are lungless eel-like creatures that are actually amphibians.


I don't get it: does the prediction go backwards or forwards along CA generations?


It has the signature style of an app generated from the Claude web UI. There isn’t necessarily an "it" to get.


Exploring the site, the about page, and the related links made me quite confident this isn't just vibe-coded with Claude.

It seems like a passion project and a niche interest by the author.


CNNs are CAs if you don't insert fully connected layers, actually.
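
To make that concrete, here's a minimal sketch (the Game of Life choice and all names are mine, just for illustration): a single 3x3 convolution computes neighbor counts, and a pointwise rule plays the role of the activation, so each "layer" is exactly one CA generation.

    import numpy as np
    from scipy.signal import convolve2d

    # 3x3 kernel: each cell sums its 8 neighbors, not itself.
    KERNEL = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])

    def life_step(grid):
        """One Game of Life generation = one conv + a pointwise rule."""
        n = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
        # Pointwise "activation": birth on 3 neighbors, survival on 2 or 3.
        return ((n == 3) | ((grid == 1) & (n == 2))).astype(grid.dtype)

    grid = (np.random.rand(32, 32) < 0.3).astype(np.uint8)
    for _ in range(10):
        grid = life_step(grid)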


You also need to make the CNN recurrent and let it unfold over many steps, ensure the input and output grids are the same size, and avoid non-local operations like global pooling, certain norms, etc. (see the sketch below).

Either way, the parent comment is correct. An arbitrary NN is better than a CA at learning non-local rules, unless the global rule can be easily described as a composition of local rules. (CAs can still learn any global rule, though; it's just harder, and you run into vanishing-gradient problems for very distant rules.)

They are pretty cool, with emergent behaviors, and sometimes they generalise very well.
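
A minimal PyTorch sketch of those constraints (all names are mine; this is an illustration, not code from any particular paper): one small learned local rule applied recurrently, "same" padding so the grid never changes size, and no global pooling or norms anywhere.

    import torch
    import torch.nn as nn

    class NeuralCA(nn.Module):
        """A CNN that qualifies as a CA: local kernels only, weights shared
        across steps, grid size preserved, nothing global."""
        def __init__(self, channels=16):
            super().__init__()
            # padding=1 keeps HxW fixed; kernel_size=3 keeps the rule local.
            self.perceive = nn.Conv2d(channels, 64, kernel_size=3, padding=1)
            self.update = nn.Conv2d(64, channels, kernel_size=1)

        def step(self, grid):
            return grid + self.update(torch.relu(self.perceive(grid)))

        def forward(self, grid, steps=20):
            # The same local rule unfolds over many steps, so information
            # can only travel one cell per step.
            for _ in range(steps):
                grid = self.step(grid)
            return grid

    ca = NeuralCA()
    out = ca(torch.randn(1, 16, 32, 32))  # output is still a 32x32 grid

The receptive field grows one cell per step, which is also why very distant rules are hard: the gradient has to survive every intermediate step.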


Yep, see my comment above and especially http://arxiv.org/abs/1809.02942


Well, kinda? I often know what chunks / functions I need, but I'm too lazy to think through exactly how to implement them, how they should work inside. Yeah, you do need an overall idea of what you are trying to make.


What do you mean, several years? It became feasible like 6 months ago lol. No, GPT-3.5 doesn't count; it's completely useless.

