What AI remembers and what it forgets
What AI forgets
Some AI tools now remember things across conversations — your name, your preferences, broad patterns. But this memory is selective and shallow. Within a single conversation, AI has a limited “context window” — the amount of information it can hold at once. Long conversations can overflow this window, and AI starts losing track of earlier details. The context window is still the core constraint.
This two-step exercise will make the memory limitation feel real. It takes about two minutes.
> My name is Alex, I work at a company called Brightpath, and I'm working on a project called Project Lighthouse that aims to redesign our customer onboarding flow. The project is in week 3 of 8.
Paste this into your AI and have a short conversation about the project. Ask it a follow-up question or two — maybe about priorities or risks. It’ll respond intelligently because everything is in the current conversation.
> Based on what I told you about Project Lighthouse, what should my priorities be for next week?
Now open a brand-new conversation and paste this. If your AI has no idea what you're talking about, that's the memory wall. If it does remember something (some tools now have cross-chat memory), notice what it remembers versus what it forgets. It might recall your name but not the week number, the project timeline, or the details of your conversation. That's the difference between shallow memory and deep context.
Try checking your AI’s memory settings to see what it’s stored about you. In ChatGPT: Settings → Personalization → Memory. In Claude: it learns automatically over time — you can view and manage what it knows in Settings.
Most major AI tools now have some form of cross-conversation memory. Here’s how they compare — but keep in mind these features are evolving fast.
| Feature | ChatGPT | Claude | Gemini |
|---|---|---|---|
| Cross-chat memory | Yes (Memory) | Yes (learns over time) | Limited |
| What it remembers | High-level facts, preferences | Patterns, preferences | Varies |
| What it forgets | Full conversation details | Specific instructions | Most context |
| How to manage | Settings → Personalization → Memory | Automatic; view in Settings | Varies |
Memory features will keep improving, but for now, these three habits will save you from a lot of frustration.
Break big projects into smaller conversations. Each conversation should have a clear, focused goal rather than trying to do everything in one thread. Think “draft the introduction” instead of “write the whole report.”
Re-state important context at the start. When you start a new conversation about an ongoing project, give the AI a brief recap of where things stand. Two or three sentences is usually enough to get it up to speed.
Use your custom instructions as a baseline. They give every conversation your identity and preferences automatically, so you only need to add project-specific context.
Takeaway: AI starts fresh every conversation. Memory features are getting better, but they capture preferences, not full context. For now, be prepared to re-state important details and break large projects into focused conversations.
The “context window” is measured in tokens (chunks of text; one token is roughly three-quarters of a word). Early models had ~4K tokens (~3,000 words). Current models handle 100K–200K+ tokens (~75K–150K words). That sounds like a lot, but a long conversation with back-and-forth can fill it quickly.
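If you want a ballpark feel for these numbers, a common rule of thumb is ~4 characters per token for English text. This is a rough heuristic, not any real tokenizer's behavior; the sketch below just shows the arithmetic:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Real tokenizers vary; use this only for ballpark budgeting.
    return max(1, len(text) // 4)

# A 3,000-word document at ~6 characters per word (space included)
# lands near the ~4K-token budget of early models.
document = "token " * 3000          # 3,000 five-letter words, each plus a space
print(estimate_tokens(document))    # 4500
```

The point of the estimate isn't precision; it's noticing when a conversation is creeping toward the budget.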
Here’s the mental model: imagine a whiteboard that can only hold so much text. As the conversation goes on, new messages get written at the bottom. When the whiteboard fills up, earlier messages get erased from the top. The AI can only “see” what’s currently on the board.
When the window fills, earlier messages get “pushed out” and the model loses track. Some tools manage this automatically by summarizing older messages. Others simply lose context without warning.
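The whiteboard model can be sketched as a simple truncation loop. This is an illustration, not how any particular tool implements it; the character-based token heuristic and the message list are made up for the example:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_to_window(messages: list[str], budget_tokens: int) -> list[str]:
    """Drop the oldest messages until the conversation fits the budget.

    The simplest truncation strategy; some tools summarize older
    messages instead, but early details still fade either way.
    """
    kept = list(messages)
    while kept and sum(estimate_tokens(m) for m in kept) > budget_tokens:
        kept.pop(0)  # erase from the top of the whiteboard
    return kept

history = [
    "My name is Alex and I work at Brightpath.",  # oldest message
    "Project Lighthouse is in week 3 of 8.",
    "Here is a long design discussion..." * 100,  # fills the window
    "What should my priorities be next week?",    # newest message
]
kept = trim_to_window(history, budget_tokens=900)
# The oldest message is gone: the model no longer "sees" your name.
```

Notice that the message that gets erased first is exactly the one with your name and company in it, which is why details from the start of a long conversation are the first to disappear.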
The practical takeaway: don't assume the AI remembers something from 20 messages ago. If a detail is critical to your current request, include it again, even if you mentioned it earlier. It's a small habit that prevents a lot of confusion.