I'm building a graph memory too, and I agree with you: generating triplets is almost useless. Instead, I generate nodes that are usually statement strings and can extend up to a short paragraph.
I have strong opinions that memory should be a graph + vector hybrid. The vector store can index information as a cognitive fragment (e.g. all things related to my house) that keeps being edited as a set of statements, while that node is associated with other nodes (e.g. my renovation plans, budgeting, etc.), because those are separate fragments. I'm also using an LLM to consolidate and find new patterns across the connected memories.
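Concretely, here's a minimal sketch of what such a fragment node could look like. All names and fields are illustrative, not from any particular library:

```python
from dataclasses import dataclass, field

@dataclass
class Fragment:
    id: str
    topic: str                                            # e.g. "my house"
    statements: list[str] = field(default_factory=list)   # editable set of facts
    embedding: list[float] | None = None                  # lives in the vector index
    edges: dict[str, str] = field(default_factory=dict)   # relation -> fragment id

house = Fragment(
    id="frag-house",
    topic="my house",
    statements=["3-bed townhouse", "roof replaced in 2021"],
    edges={"has_plan": "frag-renovation", "constrained_by": "frag-budget"},
)

# Editing means rewriting the statement set, then re-embedding so the
# vector index stays current; embed() is a stand-in, not a real call.
house.statements.append("kitchen renovation planned for spring")
# house.embedding = embed("\n".join(house.statements))
```

The point is that the unit of storage is a small editable document, not a triplet; the graph only carries the associations between those documents.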
I find the associative nature of our memory can only be represented as a graph, but we definitely need a vector store.
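To make that concrete, retrieval in such a hybrid would look like: a vector search lands on the most relevant fragment, then graph traversal pulls in its associations. A toy sketch with hand-rolled dicts standing in for both stores (the vector step is only indicated in a comment):

```python
# Hypothetical hybrid recall: a vector hit picks the entry fragment,
# then edges supply the associated fragments.
GRAPH = {
    "frag-house":      {"statements": ["roof replaced in 2021"],
                        "edges": ["frag-renovation", "frag-budget"]},
    "frag-renovation": {"statements": ["kitchen reno planned for spring"],
                        "edges": []},
    "frag-budget":     {"statements": ["$15k set aside for the house"],
                        "edges": []},
}

def recall(entry_id: str, hops: int = 1) -> list[str]:
    seen, frontier = {entry_id}, [entry_id]
    for _ in range(hops):  # follow edges outward from the vector hit
        frontier = [n for fid in frontier
                    for n in GRAPH[fid]["edges"] if n not in seen]
        seen.update(frontier)
    return [s for fid in seen for s in GRAPH[fid]["statements"]]

# nearest("what's happening with my house?") -> "frag-house"  (vector step, assumed)
print(recall("frag-house"))  # house facts plus renovation and budget associations
```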
I like to pose this as an identity question rather than a memory question, since colloquially "memory" is associated only with fact extraction and retrieval, while identity extends that to traits, behavior, preferences, memory, and narrative causal relations: essentially a self-organization of symbols and memories to produce a coherent singular entity.
90 percent of AI companion use cases today can work well with just a vector DB to retrieve facts and chunks of memory, but a connected digital footprint would need a graph + vector hybrid.
Memory in the coming future will not just be about fact retrieval; it will need memetic backlinks, new streams of data, holistic analysis, an infinite schema-less key-value store, causal reasoning, and the other things that define the "who and why" of a human, imitating neuroscience's current understanding of how our identities work. All of this then needs to be translated into language chunks for LLMs.
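As a toy illustration of that last step only: rendering a schema-less identity store into a language chunk a model can consume. The record shape and keys here are entirely made up:

```python
# Hypothetical identity record -> prompt-ready language chunk.
identity = {
    "traits": ["detail-oriented", "risk-averse"],
    "preferences": {"writing_style": "terse", "humor": "dry"},
    "narrative": "left finance in 2019 to build developer tools",
    "causal": [("dislikes meetings", "because of a", "deep-work habit")],
}

def to_language_chunk(ident: dict) -> str:
    lines = [f"Traits: {', '.join(ident['traits'])}"]
    lines += [f"Prefers {k.replace('_', ' ')}: {v}"
              for k, v in ident["preferences"].items()]
    lines.append(f"Backstory: {ident['narrative']}")
    lines += [f"{a} {rel} {b}" for a, rel, b in ident["causal"]]
    return "\n".join(lines)

print(to_language_chunk(identity))  # paste-able context block for the model
```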
I'm benchmarking this against popular tools on LongMemEval and getting good results so far. I'd love to learn from you all: what's your take on identity and human representation for LLMs in the coming future?
Surely, keeping a 1:1 match between our perception of the universe and the universe itself is a kind of order that would be constantly under threat from the inexorable move towards increasing entropy. A lack of match, either now or in the future, is what you would expect: chaos is the expected state, order the exception. Hence you should not expect that match to exist by default.
I recently read an article on HN arguing that entropy (and hence its increase) is basically just a statistical model of our uncertainty about things we don't know. As soon as our understanding (of, e.g., the positions and velocities of individual molecules) increases, what we call entropy decreases. At least that's my understanding.
You can be born with your perception. For example, opposite sexes generally attract, and you don't really have to teach this. We're born to see things the same way, but somehow we work to obstruct this by modding our perception. When people mod their perception, they become a god (a creator). For example, you aren't the tallest one in the room, but you see yourself as such. You create a reality, and this is something people are addicted to; it causes all the harm in the world (someone manipulating our shared perception with their lie).
Reality is a repository that we must all be good maintainers of. Beware the false PR (delusions).
Which brings me to the author's article. Many creatures on earth, past and present, omit or ignore all the little details. They live a lie that is just as intricate as the details. Details matter if the details matter to you. Humans are world builders, and they will reshape the details to see what they want. So will a snake: its own tail can look like food if necessary.
Why choose a graph data structure? Is it better for AI agent memory than vector embeddings of statements or a NoSQL database? I'm bullish on graphs, but it seems like no one else is, so I want to understand your perspective on this.
Graph DBs (KGs) add much-needed structure to the unstructured data fed into traditional RAG. They are more holistic (the interconnectedness of all things) than traditional relational DBs. That comes with three advantages:
1. They surface insights you didn't know you needed, like discovering that both Bob and Alice play basketball and paint the wilderness, so they could be a good match (see the sketch after this list)!
2. KGs are simple for humans to read, so they're a natural fit for natural language processing.
3. KGs are dead simple and scalable. No complex tables and referencing, just dots and lines, so LLMs have an easy time understanding them.
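Here's a quick sketch of advantage #1, surfacing the Bob/Alice match from structure alone. It uses `networkx` as a stand-in graph store, and the data is obviously made up:

```python
# pip install networkx -- shared-neighbor discovery over a toy KG.
import networkx as nx

kg = nx.Graph()
kg.add_edges_from([
    ("Bob", "basketball"), ("Bob", "wilderness painting"),
    ("Alice", "basketball"), ("Alice", "wilderness painting"),
    ("Carol", "chess"),
])

people = {"Bob", "Alice", "Carol"}
for a in people:
    for b in people - {a}:
        shared = set(kg.neighbors(a)) & set(kg.neighbors(b))
        if len(shared) >= 2 and a < b:   # a < b: report each pair once
            print(f"{a} and {b} share {shared} -- possible match")
```

Nobody queried for "people who might get along"; the insight falls out of the edges.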
I really think graph data structures are the missing piece for fixing RAG, but they haven't been mass adopted because building quality graphs from unstructured data is HARD. But we're getting there!
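For anyone wondering why it's hard: the extraction step usually ends up as an LLM call whose output you then have to validate, dedupe, and reconcile. A rough sketch, where the prompt and the `llm()` callable are stand-ins and not any real API:

```python
import json

EXTRACT_PROMPT = (
    "Extract standalone factual statements from the text below.\n"
    'Return JSON: {"nodes": ["..."], "edges": [["node_i", "relation", "node_j"]]}\n\n'
)

def build_graph_chunk(text: str, llm) -> dict:
    raw = llm(EXTRACT_PROMPT + text)   # llm() is a caller-supplied stand-in
    data = json.loads(raw)             # malformed JSON is the first failure mode
    # Real pipelines then dedupe nodes, resolve aliases ("Bob" vs "Robert"),
    # and drop edges that reference nonexistent nodes -- that cleanup is
    # where most of the "quality graph" difficulty lives.
    return data
```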
> But one problem I see with these memory systems is that they can reduce interest on a topic once we put it in the KB.
Can you elaborate please?