I wouldn't be surprised if they essentially just add it to the prompt. ("You are ChatGPT... You are talking to a user who prefers cats over dogs, is afraid of spiders, and prefers bullet points over long text...").
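Something like this, presumably; just a sketch, with the `memories` list and prompt wording made up for illustration:

```python
# Minimal sketch of the "just add it to the prompt" approach.
# The memories and prompt wording are invented, not OpenAI's actual prompt.
memories = [
    "Prefers cats over dogs.",
    "Is afraid of spiders.",
    "Prefers bullet points over long text.",
]

system_prompt = (
    "You are ChatGPT...\n"
    "Things you know about this user:\n"
    + "\n".join(f"- {m}" for m in memories)
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Recommend a pet for me."},
]
```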
I think a RAG approach with a vector DB is more likely, just like when you add a file to your prompt or to custom GPTs.
Adding the entire file (or memory, in this case) would take up too much of the context. So just query the DB, and if there's a match, add it to the prompt once the conversation has started.
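A hand-wavy sketch of that retrieval step; `embed()` is a toy stand-in for a real embedding model, and everything else here is invented:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy bag-of-words embedding so the sketch actually runs;
    # a real system would call an embedding model here.
    vec = np.zeros(256)
    for tok in text.lower().split():
        vec[hash(tok) % 256] += 1.0
    return vec

memories = ["Prefers cats over dogs.", "Is afraid of spiders."]
memory_vecs = np.array([embed(m) for m in memories])

def retrieve(query: str, k: int = 3) -> list[str]:
    q = embed(query)
    # Cosine similarity of the query against every stored memory.
    sims = memory_vecs @ q / (
        np.linalg.norm(memory_vecs, axis=1) * np.linalg.norm(q) + 1e-9
    )
    top = np.argsort(sims)[::-1][:k]
    return [memories[i] for i in top]

# Prepend only the retrieved slice to the prompt, not the whole store.
relevant = retrieve("Should I get a pet tarantula?")
```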
These "memories" seem rather short, much shorter than the average document in a knowledge base or FAQ, for example. Maybe they do get compressed to embedding vectors, though.
I could imagine that once there are too many, it would indeed make sense to treat them as a database, though: "prefers cats over dogs" is probably not salient information for most queries.
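Continuing the sketch above, salience could be as simple as a similarity cutoff, so a memory is only injected when it clears the bar (the 0.3 threshold is an arbitrary placeholder):

```python
def salient_memories(query: str, threshold: float = 0.3) -> list[str]:
    # Only inject memories that clear a similarity cutoff, so
    # "prefers cats over dogs" stays out of unrelated queries.
    q = embed(query)
    out = []
    for m, v in zip(memories, memory_vecs):
        sim = float(v @ q / (np.linalg.norm(v) * np.linalg.norm(q) + 1e-9))
        if sim >= threshold:
            out.append(m)
    return out
```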
My hunch is that they summarize the conversation periodically and inject that as additional system prompt constraints.
That was a common hack for the LLM context-length problem, but now that context length is "solved", the same trick could be more useful for aligning output a bit better.
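Roughly like this, using the OpenAI client as an example; the model name and prompt wording are placeholders, not anything OpenAI has confirmed:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_for_memory(history: list[dict]) -> str:
    """Compress a chat history into a few durable facts about the user."""
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": "Extract lasting facts and preferences about the "
                        "user from this conversation, one per line."},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content

# Periodically fold the summary back into the system prompt:
# system_prompt += "\nKnown about this user:\n" + summarize_for_memory(history)
```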
I did something similar before this feature launched, to build a viable behavior-therapist AI. I'm no doctor; "viable" to me meant it worked and remembered previous info as a base for next steps.
Periodically "compress" chat history into relevant context and keep that slice of history as part of the memory.
A 15-day message history could be condensed greatly and still produce great results.
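A minimal sketch of that compress-and-keep loop, reusing the hypothetical `summarize_for_memory` from above; the window size is an arbitrary choice:

```python
def compress_history(history: list[dict], summary: str,
                     keep_last: int = 10) -> tuple[str, list[dict]]:
    # Keep the last few turns verbatim and fold everything older
    # into a running summary that rides along in the system prompt.
    if len(history) <= keep_last:
        return summary, history
    older, recent = history[:-keep_last], history[-keep_last:]
    new_summary = summarize_for_memory(
        [{"role": "system", "content": f"Summary so far: {summary}"}] + older
    )
    return new_summary, recent
```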