Hacker News | NBJack's comments

I'll do you one better: WTH is/was Glitch? I think I'm so far out of the loop I've reached Lagrange point 2.

Low-code tool plus hosting platform, and also the final form of Fog Creek, which you may have heard about from Joel on Software blog posts if you read tech blogs 15 years ago.

How many cuils are we talking about?

I would like to report an instance of heavy Baader-Meinhof: just yesterday I randomly wondered how many cuils are actually genuinely achievable in simple text, and it's two at most IMO.

In other words, they learn the game, not how to play games.

They memorize the answers, not the process to arrive at the answers.

They learn the value of specific actions in specific contexts based on the rewards they received during their play time. Specific actions and specific contexts are not transferable for various reasons. John noted that varying frame rates and variable latency between action and effect really confuse the models.
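
To make "the value of specific actions in specific contexts" concrete, here is roughly the tabular Q-learning picture. This is an illustrative sketch only, not the setup Carmack actually described; deep RL agents replace the table with a neural network, but the learned values stay tied to the exact context in the same way.

    import random
    from collections import defaultdict

    # Q[(state, action)] holds the learned value of taking `action` in `state`.
    # The key point: values are keyed on the exact context, which is why a
    # change in frame rate or timing effectively produces unseen states.
    Q = defaultdict(float)
    alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration

    def choose_action(state, actions):
        # Mostly exploit the highest-valued action, occasionally explore.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state, actions):
        # Nudge the value of this (state, action) pair toward the reward
        # plus the discounted value of the best follow-up action.
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])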

Okay, so fuzz the frame rate and latency? That feels very easy to fix.
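
To be concrete about what "fuzzing" could mean: a rough sketch of a wrapper that randomizes frame skip and action latency each episode, assuming a Gymnasium-style discrete-action env where action 0 is a no-op (the class name is invented):

    import random
    from collections import deque

    import gymnasium as gym

    class JitterWrapper(gym.Wrapper):
        """Randomize frame skip and action delay per episode (illustrative sketch)."""

        def __init__(self, env, max_skip=4, max_delay=3):
            super().__init__(env)
            self.max_skip = max_skip
            self.max_delay = max_delay

        def reset(self, **kwargs):
            # Draw a fresh frame skip and action latency for this episode.
            self.skip = random.randint(1, self.max_skip)
            self.queue = deque([0] * random.randint(0, self.max_delay))  # 0 assumed to be a no-op
            return self.env.reset(**kwargs)

        def step(self, action):
            # Delay the agent's action by a few steps, then repeat it `skip` times.
            self.queue.append(action)
            delayed = self.queue.popleft()
            total_reward = 0.0
            for _ in range(self.skip):
                obs, reward, terminated, truncated, info = self.env.step(delayed)
                total_reward += reward
                if terminated or truncated:
                    break
            return obs, total_reward, terminated, truncated, info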

Good point, you should write to John Carmack and let him know you've figured out the problem.

This has been disproven so many times... They clearly do both. You can trivially prove this yourself.

> You can trivially prove this yourself.

Given the long list of dead philosophers of mind, if you have a trivial proof, would you mind providing a link?


Just go and ask ChatGPT or Claude something that can't possibly be in its training set. Make something up. If it is only memorising answers then it will be impossible for it to get the correct result.

A simple nonsense programming task would suffice. For example "write a Python function to erase every character from a string unless either of its adjacent characters are also adjacent to it in the alphabet. The string only contains lowercase a-z"

That task isn't anywhere in its training set so they can't memorise the answer. But I bet ChatGPT and Claude can still do it.
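
(For reference, a correct solution is only a few lines of Python. This is my own reading of the prompt, with an invented function name:)

    def filter_alphabet_neighbors(s: str) -> str:
        """Keep s[i] only if a character next to it in the string is also
        next to it in the alphabet (e.g. 'b' survives beside 'a' or 'c')."""
        def adjacent(a, b):
            return abs(ord(a) - ord(b)) == 1

        return "".join(
            c for i, c in enumerate(s)
            if (i > 0 and adjacent(c, s[i - 1]))
            or (i < len(s) - 1 and adjacent(c, s[i + 1]))
        )

    # filter_alphabet_neighbors("abzxcd") == "abcd"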

Honestly this is sooooo obvious to anyone that has used these tools; it's really insane that people are still parroting (heh) the "it just memorises" line.


LLMs don't "memorize" concepts like humans do. They generate output based on token patterns in their training data. So instead of having to be trained on every possible problem, they can still generate output that solves it by referencing the most probable combination of tokens for the specified input tokens. To humans this seems like they're truly solving novel problems, but it's merely a trick of statistics. These tools can reference and generate patterns that no human ever could. This is what makes them useful and powerful, but I would argue not intelligent.

> To humans this seems like they're truly solving novel problems

Because they are. This is some crazy semantic denial. I should stop engaging with this nonsense.

We have AI that is kind of close to passing the Turing test and people still say it's not intelligent...


Depending on the interviewer, you could make a non-AI program pass the Turing test. It's quite a meaningless exercise.

Obviously I mean for a sophisticated interviewer. Not nonsense like the Loebner prize.

The Turing test is contrived: it's confined to chatting via a textual interface.

These machines are only able to output text.

It seems hard to think they couldn't reasonably fool any -normal- person.

Tech only feels like magic if you don't know how it works


> Because they _are_.

Not really. Most of those seemingly novel problems are permutations of existing ones, like the one you mentioned. A solution is simply a specific permutation of tokens in the training data which humans are not able to see.

That's not to say the permutation can't be something that genuinely didn't exist before, or even something that's actually correct, but those scenarios are much rarer.

None of this is to say that these tools can't be useful, but thinking that this is intelligence is delusional.

> We have AI that is kind of close to passing the Turing test and people still say it's not intelligent...

The Turing test was passed arguably decades ago. It's not a test of intelligence. It's an _imitation game_ where the only goal is to fool humans into thinking they're having a text conversation with another human. LLMs can do this very well.


People who say that LLMs memorize stuff are just as clueless as those who assume that there's any reasoning happening.

They generate statistically plausible answers (to put it simply) based on the training set and weights they have.


What if that’s all we’re doing, though?

Most of us definitely do :)

Or we do it most of the time :)


It’s really easy: go to Claude and ask it a novel question. It will generally reason its way to a perfectly good answer even if there is no direct example of it in the training data.

When LLMs come up with answers to questions that aren't directly exemplified in the training data, that's not proof at all that they reasoned their way there; it can very much still be pattern matching, with no insight into the actual computation that generated the answer.

If we were taking a walk and you asked me for an explanation for a mathematical concept I have not actually studied, I am fully capable of hazarding a casual guess based on the other topics I have studied within seconds. This is the default approach of an LLM, except with much greater breadth and recall of studied topics than I, as a human, have.

This would be very different than if we sat down at a library and I applied the various concepts and theorems I already knew to make inferences, built upon them, and then derived an understanding based on reasoning of the steps I took (often after backtracking from several reasoning dead ends) before providing the explanation.

If you ask an LLM to explain its reasoning, it's unclear whether it just guessed the explanation too, or whether that was actually the set of steps it took to reach the first answer it gave you. This is why LLMs are able to correct themselves after claiming "strawberry" has 2 r's, but when providing (guessing again) their explanations they make more "relevant" guesses.


I'm not sure what "just guessed" means here. My experience with LLMs is that their "guesses" are far more reliable than a human's casual guess. And, as you say, they can provide cogent "explanations" of their "reasoning." Again, you say they might be "just guessing" at the explanation, what does that really mean if the explanation is cogent and seems to provide at least a plausible explanation for the behavior? (By the way, I'm sure you know that plenty of people think that human explanations for their behavior are also mere narrative reconstructions.)

I don't have a strong view about whether LLMs are really reasoning -- whatever that might mean. But the point I was responding to is that LLMs have simply memorized all the answers. That is clearly not true under any normal meanings of those words.


LLMs clearly don't reason in the same way that humans or SMT solvers do. That doesn't mean they aren't reasoning.

How do you know it’s a novel question?

You have probably seen examples of LLMs doing the "mirror test", i.e. identifying themselves in screenshots and referring to the screenshot in the first person. That is a genuinely novel question, as an "LLM mirror test" wasn't a concept that existed before about a year ago.

Elephant mirror tests existed, so it doesn’t seem all that novel when the word “elephant” could just be substituted for the word “LLM”?

The question isn't about universal novelty, but whether the prompt/context is novel enough such that the LLM answering competently demonstrates understanding. The claim of parroting is that the dataset contains a near exact duplicate of any prompt and so the LLM demonstrating what appears to be competence is really just memorization. But if an LLM can generalize from an elephant mirror test to an LLM mirror test in an entirely new context (showing pictures and being asked to describe it), that demonstrates sufficient generalization to "understand" the concept of a mirror test.

How do you know it’s the one generalizing?

Likely there has already been at least one text that does that for, say, dolphin mirror tests or chimpanzee mirror tests.


It's not exactly difficult to come up with a question that's so unusual the chance of it being in the training set is effectively zero.

And as any programmer will tell you: they immediately devolve into "hallucinating" answers, not trying to actually reason about the world. Because that's what they do: they create statistically plausible answers even if those answers are complete nonsense.

Can you provide some examples of these genuinely unique questions?

I'm not sure what you mean by "genuinely." But in the coding context LLMs answer novel questions all the time. My codebase uses components and follows patterns that an LLM will have seen before, but the actual codebase is unique. Yet, the LLM can provide detailed explanations about how it works, what bugs or vulnerabilities it might have, modify it, or add features to it.

It must not have existed prior in any text database whatsoever.

It certainly wasn't. The codebase is thousands of lines of bespoke code that I just wrote.

And pretty much every line in it was written similarly somewhere else before, explanations included, and is somehow part of the massive data set it was trained on.

So far I have asked the AI some novel questions and it came up with novel answers full of hallucinated nonsense, since it copied some similarly named setting or library function and replaced part of its name with something I was looking for.


And this training data somehow includes an explanation of how these individual lines (with variable names unique to my application) work together in my unique combination to produce a very specific result? I don't buy it.

And...

> pretty much

Is it "pretty much" or "all"? The claim that the LLM simply has simply memorized all of its responses seems to require "all."


Yeahhhh, why isn't there a training structure where you play 5000 games, and the reward function is based on doing well in all of them?

I guess it's a totally different level of control: instead of immediately choosing a certain button to press, you need to set longer-term goals. "Press whatever sequence of buttons I need to over this stretch of time to end up closer to this result."

There is some kind of nested, multidimensional thing to train on here instead of immediate, limited choices.
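
In spirit that's multi-task RL: one agent, episodes drawn from a pool of games, so "doing well" means doing well on average across all of them. A toy, runnable version of the idea, assuming Gymnasium with classic-control environments standing in for the 5000 games and a random policy standing in for a real learner:

    import random
    import gymnasium as gym

    # One agent, episodes sampled from many games; the training signal the
    # comment describes is average performance over all of them.
    # (Note: the games even have different action spaces, which is part of
    # why a single shared policy is hard.)
    GAMES = ["CartPole-v1", "MountainCar-v0", "Acrobot-v1"]  # imagine 5000 instead

    returns = {name: [] for name in GAMES}
    for episode in range(30):
        name = random.choice(GAMES)                 # a different game each episode
        env = gym.make(name)
        obs, _ = env.reset()
        done, episode_return = False, 0.0
        while not done:
            action = env.action_space.sample()      # a real learner's policy would go here
            obs, reward, terminated, truncated, _ = env.step(action)
            episode_return += reward
            done = terminated or truncated
        returns[name].append(episode_return)
        env.close()

    # Reward "doing well in all of them": the average return across every game.
    overall = sum(sum(r) for r in returns.values()) / sum(len(r) for r in returns.values())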


Well yeah... If you only ever played one game in your life you would probably be pretty shit at other games too. This does not seem very revealing to me.

I am decent at chess but barely know how the pieces in Go move.

Of course, this is because I have spent a lot of time TRAINING to play chess and basically none training to play Go.

I am good on guitar because I started training young but can't play the flute or piano to save my life.

Most complicated skills have basically no transfer or carry over other than knowing how to train on a new skill.


But the point here is: if I gave you a guitar with one string more or fewer, or a differently shaped guitar, you could still play it.

If I give you a chess set with dwarf-themed pieces and different-colored squares, you could play immediately.


I don't think that's true. If you'd only ever played Doom, I think you could play, say, Counter-Strike or Half-Life and be pretty good at it, and I think Carmack is right that it's pretty interesting that this doesn't seem to be the case for AI models.

Probably a reflexive action at this point. Ingrained into what's left of their soul I assume.

It literally wouldn't surprise me if, when asked, the legal team simply responded "it's standing policy."


"soul"? Oracle never had one in the first place.

C'mon, SteamOS. You're so close to giving gamers and Windows-only application users a seamless experience on virtually any PC.

Not to sound cheeky, but which part?

The contempt for hiring American workers at an American company.

I admit the general aesthetic was off-putting and a bit hard to get excited about in the previews.

Looking at the producer's prior work, I can see why it wouldn't appeal to me: I wasn't a fan of Coco, The Good Dinosaur, or Brave either.

It didn't help that Brad Garrett's voice just didn't seem to ...fit?... their character, despite the rest of the cast.


Then count the costs. If it is worth more to you to leave such feedback and improve their world, it's always your choice.

However, you should also be either convinced that HR gives a crap, or that any potential outcomes are acceptable, including but not limited to being moved into "unregulated attrition" status, losing the ability to be hired by the same company in the future, having your words potentially turned into a lawsuit against you, etc. Unless you have actual, legal, signed documentation in place giving you such assurances, these are all on the table.


It is this sort of fear that holds society back. Individualistic thinking and the belief that one cannot make a difference anyway allow so much bad behavior to take place. With everything in life, you should always try to leave a place better than how you found it.

Please don't fall for the clickbait here. Take a look at the history of posts from this site.

And now they've introduced a tight coupling and some serious potential tech debt should the original ever change its internal workings.

Also, you seem to assume we're talking about open source here.


It's also not how file formats generally work or get defined. I'm not sure I'd even define a .pickle file that way either, and it's the poster child for a hyper-specific Python format.

