Large language models like Claude operate within a context window — a finite memory space that holds everything said in a conversation. Understanding how context windows work, and how to manage them deliberately, is essential for complex, multi-step workflows like client strategy development, data analysis, and plan generation.
The core principle: everything in a single conversation is remembered; nothing from a previous conversation is. Managing this boundary is the difference between a productive session and losing hours of accumulated context.
When you start a conversation with Claude (or any LLM), you open a context window. Every message, file upload, AI response, and piece of data you share lives inside that window. The model can reference anything from earlier in the same conversation.
When you start a new conversation, the slate is blank. The model has no memory of prior sessions unless you explicitly re-introduce that information.
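A useful mental model is that a conversation is just a list of messages, and a new conversation is a new, empty list. The sketch below is illustrative only — the dict shape mirrors common chat-API message formats, not Anthropic's internals:

```python
# Conceptual sketch: one conversation = one context window = one message list.
conversation = []

def add_turn(history, role, text):
    """Append a turn; everything in this list stays visible to the model."""
    history.append({"role": role, "content": text})

add_turn(conversation, "user", "Client background and objectives for AdavaCare.")
add_turn(conversation, "assistant", "Noted. Ready for the data uploads.")

# Within the same conversation, every earlier turn can still be referenced:
assert any("AdavaCare" in m["content"] for m in conversation)

# A brand-new chat starts empty: nothing carries over automatically.
new_conversation = []
assert new_conversation == []
```

This is why re-introducing key facts at the start of a new chat matters: the new list contains only what you put in it.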
This means:
- Keep related work in one conversation. Don't split a client strategy session across multiple chats.
- The longer the conversation, the more context is consumed. Large file uploads, verbose AI responses, and lengthy back-and-forth all eat into the available window.
- As the window fills, quality can degrade. The model may begin compressing or losing track of earlier details before it hits the hard limit.
In practice, Claude has one of the largest context windows available, so hitting the hard limit in a single working session is uncommon — but degradation can begin well before the limit is reached when conversations contain many large file uploads or verbose outputs.
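To get a rough feel for how much of the window a session has consumed, a common rule of thumb for English text is about four characters per token. The sketch below uses that heuristic and an assumed 200,000-token window — both are approximations for budgeting, not tokenizer-accurate figures:

```python
# Rough context-budget estimator. The 4-chars-per-token ratio is a common
# rule of thumb for English text, not an exact tokenizer; real counts vary.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def window_used(messages, window_size=200_000):
    """Return (estimated tokens used, fraction of the assumed window)."""
    used = sum(estimate_tokens(m["content"]) for m in messages)
    return used, used / window_size

messages = [
    # A large pasted export dominates the budget of this toy example:
    {"role": "user", "content": "Client background: " + "x" * 8000},
    {"role": "assistant", "content": "Acknowledged. Ready for the next upload."},
]
used, fraction = window_used(messages)
```

Even a crude estimate like this makes it obvious that a handful of large file uploads consumes far more of the window than dozens of short conversational turns.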
When a context window is nearing its limit, use this recovery technique:
Ask Claude to "summarize our work here" — a request for a compact summary of all key findings, decisions, data points, and next steps established in the conversation. This preserves the essential intelligence of the session without carrying the full token weight of every prior exchange.
From the AdavaCare training session: Mark Hope demonstrated this live — noting that if the window nears its limit, you should "say summarize our work here, and then you can go to a new context window and paste that summary. So at least it knows something."
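Mechanically, the recovery technique amounts to seeding a fresh message list with the summary. A minimal sketch (the function name, message format, and wrapper prompt are illustrative assumptions, not a prescribed API):

```python
# Hypothetical sketch of "summarize, then restart": carry the summary string
# into a new conversation as its opening message.
def seed_new_session(summary: str) -> list:
    """Start a fresh conversation pre-loaded with the prior session's summary."""
    return [{
        "role": "user",
        "content": (
            "Context carried over from a previous session:\n"
            + summary
            + "\nTreat this as established background; ask if anything is unclear."
        ),
    }]

# Placeholder summary text; in practice, paste Claude's actual summary here.
summary = "Key findings: <findings>. Decisions: <decisions>. Next steps: <next steps>."
new_chat = seed_new_session(summary)
```

The new window starts nearly empty, but the distilled intelligence of the old session rides along in a few hundred tokens instead of tens of thousands.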
Establish the most important facts early: client background, business objectives, key metrics, and constraints. This ensures the model has strong grounding even if later context gets compressed.
When uploading multiple data files (e.g., Google Search Console exports, Google Analytics reports, Ahrefs data, Google Ads reports), instruct Claude to acknowledge each upload and wait, rather than generating a full analysis after every file. This conserves context for the synthesis phase.
Example prompt: "I'm going to upload several reports. Please acknowledge each one briefly and wait until I say 'go' before analyzing."
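The resulting turn sequence looks like this — a purely illustrative sketch (file names, prompts, and acknowledgment wording are made-up examples):

```python
# Illustrative staged-upload sequence: brief acknowledgments per file,
# with analysis deferred until an explicit "go".
files = ["gsc_queries.xlsx", "ga_report.pdf", "ahrefs_export.xlsx"]

turns = [{
    "role": "user",
    "content": ("I'm going to upload several reports. Please acknowledge each "
                "one briefly and wait until I say 'go' before analyzing."),
}]
for f in files:
    turns.append({"role": "user", "content": f"[uploading {f}]"})
    turns.append({"role": "assistant", "content": f"Received {f}. Waiting."})

# One synthesis request at the end, instead of three full analyses:
turns.append({"role": "user", "content": "Go: synthesize all three reports."})
```

Three short acknowledgments cost a few dozen tokens; three premature full analyses could cost thousands each.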
Claude handles structured data more reliably from Excel (.xlsx) and PDF formats than from raw CSV files. When exporting from tools like Google Analytics, Ahrefs, or Google Ads, prefer these formats to reduce parsing errors and wasted context on failed reads.
Verbose AI outputs consume context fast. Use directives like:
- "Be brief"
- "Bullet points only"
- "Summarize in 3 sentences"
Reserve detailed outputs for the specific deliverables you actually need.
Don't rely on the conversation as your only record. Periodically copy key outputs — plans, OKRs, ad copy, checklists — into a Google Doc or project file. This also gives you a clean artifact to share with clients or teammates without scrolling through a full chat.
| Source | Preferred Format | Notes |
|---|---|---|
| Google Search Console | Excel (.xlsx) | Export from Queries report |
| Google Analytics | Screenshot or export | Either works well |
| Ahrefs | Excel (.xlsx) or screenshot | CSV can cause parsing issues |
| Google Ads | Tabular data export | Chart-only PDFs are not useful |