AI has traditionally been driven by "metaphor-driven development": people assume the brain has some system X, program something and give it the same name, and then assume that because it carries that name it must work the way it does in the brain.
This is generally a bad idea, but a few of the results like "neural networks" did work out… eventually.
"World model" is another example of a metaphor like this. They've assumed that humans have world models (most likely not true), and that if they program something and call it a "world model" it will work the same way (definitely not true) and will be beneficial (possibly true).
(The above critique comes from Phil Agre and David Chapman.)