I’ve been thinking about how to do this well, and about how my memory actually works. I think what’s happening is that I’ve either got the facts now (that’s easy to repro w/ a system like this), or I’ve got a sense that I could have the facts after doing some retrieval work. It’s like a feeling that somewhere in cold storage is the info I need, so I kick off a background process to get it. Sometimes it works.
That second system, the “I know this…” system, is I think what’s missing from these LLMs. They have the first one: they KNOW things they’ve seen during training. What I think is missing is the ability to build up a working set as they go, then get the “feeling” that they could know something if they did a little retrieval work. I’ve been thinking about how to repro that in a computer, where knowledge is 0|1 but could be slow to fetch.
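One way I picture that in code: keep a cheap index (keys only, no content) in fast memory over the slow store, so “might I know this?” can be answered before paying the fetch cost. This is just a toy sketch; ColdStore, Memory, and might_know are made-up names, not any real system’s API.

```python
import time


class ColdStore:
    """Slow durable storage: a fact is either there (1) or not (0),
    but finding out by fetching is expensive."""

    def __init__(self, facts: dict[str, str]):
        self._facts = facts

    def fetch(self, key: str) -> str | None:
        time.sleep(0.1)  # stand-in for disk/network latency
        return self._facts.get(key)


class Memory:
    """Fast working set plus a cheap key index over cold storage."""

    def __init__(self, cold: ColdStore, index: set[str]):
        self.working_set: dict[str, str] = {}
        self.cold = cold
        self.index = index  # metadata only: which keys exist out there

    def might_know(self, key: str) -> bool:
        # The "feeling" of knowing: either it's already loaded, or the
        # index says a retrieval would probably succeed.
        return key in self.working_set or key in self.index

    def recall(self, key: str) -> str | None:
        if key in self.working_set:      # system 1: facts I have now
            return self.working_set[key]
        if key in self.index:            # system 2: go do retrieval work
            value = self.cold.fetch(key)
            if value is not None:
                self.working_set[key] = value  # promote into working set
            return value
        return None                      # no feeling, so don't bother fetching
```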
You've identified a fundamental gap: the meta-cognitive "I could retrieve this" intuition that humans have but LLMs lack.
Our graph approach addresses this:
- Knowledge is structured so relationship patterns are visible before details are loaded
- The retrieval system "senses" related information without fetching everything
- Temporal tracking prioritizes recent and relevant information
- Recall-frequency tracking is planned, to weight frequently accessed facts higher
In SOL (our personal assistant), we guide LLMs to use memory more effectively by providing structured knowledge boundaries. This creates that "I could know this if I looked" capability. A rough sketch of the recency/frequency scoring is below.
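To make the scoring concrete: rank a node's graph neighbors by recency and access frequency before hydrating any details. The Node shape, the decay constant, and names like score and candidates are illustrative assumptions, not SOL's actual implementation.

```python
import math
import time
from dataclasses import dataclass, field


@dataclass
class Node:
    key: str
    neighbors: list[str] = field(default_factory=list)  # relationship edges
    last_accessed: float = 0.0   # unix timestamp of last recall
    access_count: int = 0        # how often this fact has been recalled


def score(node: Node, now: float, decay_s: float = 3600.0) -> float:
    """Recently and frequently recalled facts score highest."""
    recency = math.exp(-(now - node.last_accessed) / decay_s)  # ~1h time constant
    frequency = math.log1p(node.access_count)
    return recency * (1.0 + frequency)


def candidates(graph: dict[str, Node], start: str, k: int = 3) -> list[str]:
    """Scan relationship structure only; no detail fetching happens here."""
    now = time.time()
    nbrs = [graph[n] for n in graph[start].neighbors if n in graph]
    nbrs.sort(key=lambda n: score(n, now), reverse=True)
    return [n.key for n in nbrs[:k]]  # only these get hydrated later
```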