A Confusion That Keeps Getting Repeated
There is a persistent conflation in the AI discourse that I find genuinely troubling, not because it reflects ignorance, but because it obscures a problem that urgently needs solving. The conflation is this: when people hear that a language model now supports 128,000 tokens, or 200,000, or a million, they conclude that the model has "more memory." Some product announcements actively encourage this interpretation. It is, to put it plainly, wrong.
A larger context window is not more memory. It is more working space. These are fundamentally different cognitive functions, and confusing them leads to architectural decisions that look reasonable on the surface but collapse under real-world use.
Let me explain why the distinction matters, and why getting it right is critical for anyone building serious AI-assisted workflows.
Working Memory Is Not Long-Term Memory
Cognitive psychology has distinguished between working memory and long-term memory for over half a century, in a line of research that runs from George Miller's seminal 1956 paper on the limits of short-term processing capacity to Alan Baddeley's multi-component model of working memory, proposed in the 1970s and refined in the decades since. The distinction is not controversial in cognitive science. It is foundational.
Working memory is the cognitive system responsible for holding and manipulating information during active processing. It is limited in capacity, temporary by nature, and exists to serve the immediate task at hand. When you hold a phone number in your head while walking to find a pen, that's working memory. The moment you stop actively maintaining it, the information is gone.
Long-term memory, by contrast, is the system responsible for storing information across extended time periods, from hours to decades. It does not require active maintenance. It has, for practical purposes, unlimited capacity. And critically, it is organized, structured, and associative in ways that working memory is not.
The context window of a language model is a direct analog of working memory. It is the space available for active processing during a single interaction. When the interaction ends, the contents of the context window are discarded entirely. Nothing persists. Nothing is learned. Nothing is stored.
Making the context window larger is like giving someone better short-term recall. They can hold more information during a conversation. But when the conversation ends, they still remember nothing.
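To make this concrete, here is a minimal sketch of what that statelessness looks like at the API level. The `call_model` function and the message format are placeholders rather than any particular vendor's API; the point is only that every request carries its own transcript, and a fresh request carries nothing.

```python
# Sketch: chat-style model APIs are stateless. Every request must carry the
# full conversation in the prompt; nothing survives between calls.
# `call_model` is a placeholder for whatever completion endpoint you use.

def call_model(messages: list[dict]) -> str:
    """Placeholder for a real chat-completion call."""
    return f"(model reply, having seen {len(messages)} prior messages)"

# Session 1: "memory" exists only because we keep resending the transcript.
history = [{"role": "user", "content": "My project is called Atlas."}]
history.append({"role": "assistant", "content": call_model(history)})
history.append({"role": "user", "content": "What is my project called?"})
print(call_model(history))      # the name is in the prompt, so the model can answer

# Session 2, tomorrow: a new request starts with an empty transcript.
new_history = [{"role": "user", "content": "What is my project called?"}]
print(call_model(new_history))  # nothing persisted; the model has no idea
```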
What the Papers Don't Tell You About Bigger Windows
There is an additional complication that deserves more attention than it typically receives. Larger context windows do not scale linearly in their usefulness. Research on the "lost in the middle" phenomenon, documented by Liu et al. in their 2023 paper, demonstrates that language models retrieve information from the beginning and end of their context windows more reliably than from the middle. As context windows grow, the middle becomes an increasingly large zone of degraded attention.
This means that a 128K token context window does not give you 128K tokens of equally accessible information. It gives you a window in which some positions are attended to reliably and others are, in practice, only weakly attended even though they are technically "in memory." This is a well-documented limitation that has not been fully solved by any current architecture.
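For readers who want to observe this themselves, here is a rough sketch of the kind of needle-in-a-haystack probe used in this line of work: plant a single fact at varying depths in a long stretch of filler text and check whether the model can still retrieve it. The `ask_model` function, filler text, and needle are illustrative placeholders, not the exact setup from the paper.

```python
# Sketch of a needle-in-a-haystack probe for positional recall, in the spirit
# of Liu et al.'s "lost in the middle" experiments. `ask_model` is a placeholder
# for a real completion call; the filler text and needle are illustrative.

FILLER = "The quick brown fox jumps over the lazy dog. " * 2000  # padding text
NEEDLE = "The secret code is 7341."
QUESTION = "What is the secret code?"

def ask_model(prompt: str) -> str:
    """Placeholder for a real model call."""
    return ""

def build_prompt(depth: float) -> str:
    """Insert the needle at a relative depth (0.0 = start, 1.0 = end) of the filler."""
    cut = int(len(FILLER) * depth)
    return FILLER[:cut] + " " + NEEDLE + " " + FILLER[cut:] + "\n\n" + QUESTION

# Sweep the needle across the context and record whether recall survives.
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    answer = ask_model(build_prompt(depth))
    print(f"needle at {depth:.0%} depth -> correct: {'7341' in answer}")
```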
So when someone says "we don't need persistent memory, we have a million-token context," they are doubly wrong. They are wrong because a context window is not memory, and they are wrong because even as a context window, it doesn't perform uniformly across its full length.
The Properties of Real Memory
If context windows are working memory, what would genuine long-term memory look like in an AI system? Drawing on what cognitive science tells us about human long-term memory, several properties seem essential:
- Persistence without active maintenance. Information should survive beyond the end of a session without anyone having to manually save it or paste it into the next conversation. This is the most basic requirement and the one most visibly absent from current systems.
- Selective encoding. Not everything that passes through working memory should be committed to long-term storage. The human hippocampus acts as a filter, encoding experiences that are novel, emotionally significant, or repeatedly encountered, while allowing routine, unremarkable information to fade. An AI memory system needs an analogous filter.
- Consolidation and abstraction. Over time, specific episodic memories should contribute to more abstract, generalized knowledge. Research on memory consolidation, particularly the complementary learning systems theory proposed by McClelland, McNaughton, and O'Reilly, suggests that this process of gradual abstraction is essential for building flexible, transferable knowledge from specific experiences.
- Associative retrieval. Memories should be retrievable not only by explicit query but by association with the current context. When you encounter a familiar problem, relevant past experiences surface automatically. This is fundamentally different from vector similarity search, which retrieves based on surface-level semantic overlap rather than deep contextual relevance.
- Adaptive forgetting. As I've discussed in previous writing, forgetting is not a failure of memory but a feature. Robert Bjork's research on retrieval-induced forgetting and the new theory of disuse demonstrates that strategic forgetting improves the accuracy and efficiency of retrieval by reducing interference from outdated or irrelevant information.
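To ground these properties, here is a deliberately small sketch of a persistent store that exhibits three of them: persistence to disk, a crude salience filter for selective encoding, and time-decayed, cue-based retrieval as a rough stand-in for associative retrieval with adaptive forgetting. Consolidation is omitted for brevity, and every name and heuristic here (the `MemoryStore` class, the 0.5 salience threshold, the 30-day decay) is an invented illustration, not a reference design.

```python
# A deliberately tiny sketch of the properties above, not a production design.
# The salience filter, decay rule, and overlap-based retrieval are all crude
# stand-ins; MemoryStore, remember(), and recall() are invented for illustration.

import json, math, time
from pathlib import Path

class MemoryStore:
    def __init__(self, path: str = "memories.json"):
        self.path = Path(path)
        self.items = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, text: str, salience: float) -> None:
        """Selective encoding: only sufficiently salient items are persisted."""
        if salience < 0.5:
            return  # routine, unremarkable information is allowed to fade
        self.items.append({"text": text, "salience": salience, "t": time.time()})
        self.path.write_text(json.dumps(self.items))  # persistence without active maintenance

    def recall(self, cue: str, k: int = 3) -> list[str]:
        """Associative retrieval with adaptive forgetting: score items by overlap
        with the cue, discounted by age, so stale memories interfere less."""
        cue_words = set(cue.lower().split())
        now = time.time()
        def score(m):
            overlap = len(cue_words & set(m["text"].lower().split()))
            age_days = (now - m["t"]) / 86400
            return overlap * m["salience"] * math.exp(-age_days / 30)  # 30-day decay
        return [m["text"] for m in sorted(self.items, key=score, reverse=True)[:k]]
```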
Why This Distinction Matters Practically
This is not merely an academic exercise. The confusion between context windows and memory leads to concrete, costly mistakes in how teams design their AI workflows.
Teams that believe a large context window is sufficient memory tend to dump everything into the prompt. Entire codebases, full documentation, complete conversation histories. This approach hits practical limits quickly. Token costs scale linearly with prompt size. Latency increases. And as we discussed, attention degrades across long contexts, so the information you most need is often the information least attended to.
Teams that understand the distinction invest in dedicated memory infrastructure that sits alongside the model, not inside the context window. Relevant information is retrieved and injected at the right time, keeping the context window focused and efficient. Past experiences inform current interactions without consuming the entire processing budget.
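In code, that pattern might look something like the sketch below: query the memory layer first, inject only the top few results as a compact preamble, and optionally write the exchange back for future sessions. The `memory` object and `call_model` are assumptions for illustration (the `MemoryStore` sketch above would serve as one possible memory layer); real systems differ in the details.

```python
# Sketch of the retrieve-and-inject pattern: the memory layer is queried first,
# and only the few most relevant items are placed in the context window.
# `memory` and `call_model` are placeholders for illustration.

def call_model(messages: list[dict]) -> str:
    """Placeholder for a real chat-completion call."""
    return "..."

def answer_with_memory(memory, user_message: str) -> str:
    # 1. Retrieve a handful of relevant memories instead of dumping everything.
    relevant = memory.recall(user_message, k=3)

    # 2. Inject them as a compact preamble, keeping the context window focused.
    preamble = "Relevant context from past sessions:\n" + "\n".join(f"- {m}" for m in relevant)

    messages = [
        {"role": "system", "content": preamble},
        {"role": "user", "content": user_message},
    ]
    reply = call_model(messages)

    # 3. Write the exchange back so future sessions can build on it.
    memory.remember(f"User asked: {user_message}. Assistant replied: {reply}", salience=0.6)
    return reply
```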
Platforms like ChaozCode that implement memory as a separate, persistent layer understand this distinction architecturally. The context window does what it's good at: immediate processing. The memory layer does what it's good at: long-term storage, retrieval, and evolution. They work together, each doing the job it was designed for.
A Simple Test
If you want to determine whether your AI system has genuine memory or is merely exploiting a large context window, ask a simple question: if I close this conversation and open a new one tomorrow, will the AI know what we discussed?
If the answer is no, you don't have memory. You have a long conversation. And no matter how long that conversation is allowed to get, it will always end.
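If it helps, the test can even be written down. The sketch below uses a toy `ContextOnlySession` whose only memory is its own transcript; swapping in a real system's session object runs the same check against it. All names here are invented for illustration.

```python
# A concrete version of the test. ContextOnlySession is a toy stand-in for a
# system whose only "memory" is its own transcript; substitute your real
# system's session object to run the same check against it.

class ContextOnlySession:
    """Toy session: knows only what is in its own transcript."""
    def __init__(self):
        self.transcript = []
    def send(self, text: str) -> str:
        self.transcript.append(text)
        known = " ".join(self.transcript)
        return known if "codename" in text and "Basilisk" in known else "I don't know."

def passes_memory_test(open_session) -> bool:
    session_one = open_session()
    session_one.send("Remember this: our release codename is Basilisk.")
    # Session one ends here; nothing is deliberately carried forward.
    session_two = open_session()                  # a brand-new session, e.g. tomorrow
    reply = session_two.send("What is our release codename?")
    return "Basilisk" in reply

print(passes_memory_test(ContextOnlySession))     # False: a long conversation is not memory
```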
Toward a Clearer Vocabulary
I would like to propose, as a small contribution to clarity in this space, that we be more precise in our language. A context window is a processing buffer. Memory is a persistent store. A larger context window gives you more processing space. A memory system gives you accumulated experience. These are complementary capabilities, not synonyms.
The sooner the field internalizes this distinction, the sooner we can build AI systems that don't just process information impressively in the moment, but actually learn and grow from the interactions they have over time. And that is a goal worth pursuing with precision, not conflation.