Low-code tool plus hosting platform, and also the final form of Fog Creek, which you may have heard about from Joel on Software blog posts if you read tech blogs 15 years ago.
I would like to report an instance of heavy Baader-Meinhof: just yesterday I randomly wondered how many cuils are actually genuinely achievable in simple text, and it's two at most IMO.
They learn the value of specific actions in specific contexts based on the rewards they received during their play time. Those specific actions and specific contexts are not transferable for various reasons; John noted that varying frame rates and variable latency between action and effect really confuse the models.
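To make "the value of specific actions in specific contexts" concrete, here's a minimal tabular Q-learning sketch (my own illustration, not anything from the talk): the learned values live in a table keyed by the exact state and action, which is part of why nothing transfers when the game, frame rate, or action timing changes.

```python
from collections import defaultdict

# Toy tabular Q-learning update (illustrative only): the "knowledge" is a table
# of values keyed by the exact (state, action) pair, so nothing learned for one
# game's states carries over to another game's states.
alpha, gamma = 0.1, 0.99          # learning rate, discount factor
Q = defaultdict(float)            # (state, action) -> estimated value

def q_update(state, action, reward, next_state, actions):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Hypothetical usage: states here are just frame identifiers.
actions = ["left", "right", "shoot"]
q_update("frame_001", "shoot", 1.0, "frame_002", actions)
```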
Just go and ask ChatGPT or Claude something that can't possibly be in its training set. Make something up. If it is only memorising answers then it will be impossible for it to get the correct result.
A simple nonsense programming task would suffice. For example: "write a Python function to erase every character from a string unless either of its adjacent characters is also adjacent to it in the alphabet. The string only contains lowercase a-z."
That task isn't anywhere in their training sets, so they can't have memorised the answer. But I bet ChatGPT and Claude can still do it.
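For what it's worth, here's one way to solve it under one plausible reading of the prompt (keep a character only if at least one of its neighbours in the string is next to it in the alphabet); the function name and the example are mine:

```python
def keep_alphabet_neighbours(s: str) -> str:
    # Keep a character only if at least one of its neighbours in the string
    # is also its neighbour in the alphabet (e.g. 'b' next to 'a' or 'c').
    def adjacent(a: str, b: str) -> bool:
        return abs(ord(a) - ord(b)) == 1

    kept = []
    for i, ch in enumerate(s):
        left = i > 0 and adjacent(ch, s[i - 1])
        right = i < len(s) - 1 and adjacent(ch, s[i + 1])
        if left or right:
            kept.append(ch)
    return "".join(kept)

print(keep_alphabet_neighbours("abxcd"))  # -> "abcd": 'x' has no alphabet-adjacent neighbour
```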
Honestly this is sooooo obvious to anyone who has used these tools; it's really insane that people are still parroting (heh) the "it just memorises" line.
LLMs don't "memorize" concepts like humans do. They generate output based on token patterns in their training data. So instead of having to be trained on every possible problem, they can still generate output that solves a given problem by producing the most probable combination of tokens for the specified input tokens. To humans this looks like they're truly solving novel problems, but it's merely a trick of statistics. These tools can reference and generate patterns that no human ever could. This is what makes them useful and powerful, but I would argue not intelligent.
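As a toy illustration of what "generating from token patterns" means (my own example, vastly simpler than a real LLM): a bigram model only ever emits continuations it has seen, yet it can still produce sentences that never appear verbatim in its training data.

```python
import random
from collections import defaultdict

# Tiny bigram "language model": record which token follows which, then sample.
training = "the cat sat on the mat . the dog sat on the rug .".split()
follows = defaultdict(list)
for a, b in zip(training, training[1:]):
    follows[a].append(b)

token, out = "the", ["the"]
for _ in range(8):
    token = random.choice(follows[token])   # pick a statistically plausible next token
    out.append(token)
print(" ".join(out))  # e.g. "the cat sat on the rug . the dog" -- novel, but pure pattern recall
```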
Not really. Most of those seemingly novel problems are permutations of existing ones, like the one you mentioned. A solution is simply a specific permutation of tokens in the training data which humans are not able to see.
This doesn't mean the permutation is something that previously didn't exist, let alone something that's actually correct, but those scenarios are much rarer.
None of this is to say that these tools can't be useful, but thinking that this is intelligence is delusional.
> We have AI that is kind of close to passing the Turing test and people still say it's not intelligent...
The Turing test was arguably passed decades ago. It's not a test of intelligence. It's an _imitation game_ where the only goal is to fool humans into thinking they're having a text conversation with another human. LLMs can do this very well.
It’s really easy: go to Claude and ask it a novel question. It will generally reason its way to a perfectly good answer even if there is no direct example of it in the training data.
When LLMs come up with answers to questions that aren't directly exampled in the training data, that's not proof at all that they reasoned their way there; they can very much still be pattern matching, with no connection to the actual computation that produced the answer.
If we were taking a walk and you asked me for an explanation for a mathematical concept I have not actually studied, I am fully capable of hazarding a casual guess based on the other topics I have studied within seconds. This is the default approach of an LLM, except with much greater breadth and recall of studied topics than I, as a human, have.
This would be very different from sitting down at a library, where I would apply the various concepts and theorems I already knew to make inferences, build upon them, and then derive an understanding from the reasoning steps I took (often after backtracking from several dead ends) before providing the explanation.
If you ask an LLM to explain its reasoning, it's unclear whether it just guessed the explanation too, or whether that was actually the set of steps it took to reach the first answer it gave you. This is why LLMs are able to correct themselves after claiming strawberry has 2 r's: when providing their explanations they are guessing again, and those guesses just happen to be more "relevant".
I'm not sure what "just guessed" means here. My experience with LLMs is that their "guesses" are far more reliable than a human's casual guess. And, as you say, they can provide cogent "explanations" of their "reasoning." Again, you say they might be "just guessing" at the explanation, what does that really mean if the explanation is cogent and seems to provide at least a plausible explanation for the behavior? (By the way, I'm sure you know that plenty of people think that human explanations for their behavior are also mere narrative reconstructions.)
I don't have a strong view about whether LLMs are really reasoning -- whatever that might mean. But the point I was responding to is that LLMs have simply memorized all the answers. That is clearly not true under any normal meaning of those words.
You have probably seen examples of LLMs doing the "mirror test", i.e. identifying themselves in screenshots and referring to the screenshot from the first person. That is a genuinely novel question as an "LLM mirror test" wasn't a concept that existed before about a year ago.
The question isn't about universal novelty, but whether the prompt/context is novel enough that the LLM answering competently demonstrates understanding. The claim of parroting is that the dataset contains a near-exact duplicate of any prompt, so the LLM demonstrating what appears to be competence is really just memorization. But if an LLM can generalize from an elephant mirror test to an LLM mirror test in an entirely new context (being shown a screenshot and asked to describe it), that demonstrates sufficient generalization to "understand" the concept of a mirror test.
And as any programmer will tell you: they immediately devolve into "hallucinating" answers, not trying to actually reason about the world. Because that's what they do: they create statistically plausible answers even if those answers are complete nonsense.
I'm not sure what you mean by "genuinely." But in the coding context LLMs answer novel questions all the time. My codebase uses components and follows patterns that an LLM will have seen before, but the actual codebase is unique. Yet, the LLM can provide detailed explanations about how it works, what bugs or vulnerabilities it might have, modify it, or add features to it.
A codebase in which pretty much every line was written similarly somewhere else before, explanations included, and which is somehow part of the massive data set it was trained on.
So far I have asked the AI some novel questions and it came up with novel answers full of hallucinated nonsense, since it copied some similarly named setting or library function and replaced part of its name with something I was looking for.
And this training data somehow includes an explanation of how these individual lines (with variable names unique to my application) work together in my unique combination to produce a very specific result? I don't buy it.
And...
> pretty much
Is it "pretty much" or "all"? The claim that the LLM simply has simply memorized all of its responses seems to require "all."
yeahhhh, why isn't there a training structure where you play 5000 games and the reward function is based on doing well in all of them?
I guess it's a totally different level of control: instead of immediately choosing a certain button to press, you need to set longer-term goals. "press whatever sequence I need to over this time to end up closer to this result"
There is some kind of nested, multidimensional thing to train on here instead of immediate, limited choices.
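Purely as a sketch of the idea (toy stand-ins, not a real RL setup): sample a different game every episode and score the policy on its average performance across all of them, so it can't just overfit one game's quirks.

```python
import random

class ToyGame:
    # Stand-in for one of the 5000 games: guess a hidden digit in as few tries as possible.
    def __init__(self, seed):
        self.rng = random.Random(seed)

    def play(self, policy):
        target = self.rng.randint(0, 9)
        for attempt in range(10):
            if policy.act(attempt) == target:
                return 1.0 - attempt / 10      # normalised score in [0, 1]
        return 0.0

class RandomPolicy:
    def act(self, observation):
        return random.randint(0, 9)

def reward_across_games(policy, games, episodes=200):
    # The reward signal is mean performance over many different games,
    # not the score on any single one.
    return sum(random.choice(games).play(policy) for _ in range(episodes)) / episodes

games = [ToyGame(seed) for seed in range(5000)]
print(reward_across_games(RandomPolicy(), games))
```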
Well yeah... If you only ever played one game in your life you would probably be pretty shit at other games too. This does not seem very revealing to me.
I don't think that's true. If you'd only ever played Doom, I think you could play, say, Counter-Strike or Half-Life and be pretty good at it, and I think Carmack is right that it's pretty interesting that this doesn't seem to be the case for AI models.
Then count the costs. If leaving such feedback and improving their world is worth more to you than those costs, it's always your choice.
However, you should also be convinced either that HR gives a crap, or that any potential outcome is acceptable, including but not limited to being moved into "unregulated attrition" status, losing the ability to be hired by the same company in the future, having your words turned into a lawsuit against you, etc. Unless you have actual, legal, signed documentation giving you such assurances, all of these are on the table.
It is this sort of fear that holds society back. Individualistic thinking and a belief that one cannot make a difference anyways allows so much bad behavior to take place. With everything in life, you should always try to leave a place better than how you found it.
It's also not how file formats generally work or get defined. I'm not sure I'd even define a .pickle file that way either, and it's the poster child for a hyper-specific Python format.