Getting useful output from AI tools requires more than a quick question. The quality of AI-generated content is almost entirely determined by the quality of context and prompting you provide. Poor prompting produces generic, detectable AI content; good prompting produces work that's indistinguishable from expert human writing.
These practices were surfaced during an internal ops sync on 2025-09-30 and reflect hands-on experience across multiple client content projects. See also: [1].
The most common mistake is jumping straight to the writing request. Before asking AI to produce anything, ensure it genuinely understands the subject matter.
The warm-up technique:
1. Give the AI the client's website URL and say only: "Go to this website and tell me what this company does."
2. Read its summary. If it's accurate, proceed. If not, correct it before continuing.
3. Introduce the specific topic: "Now we're going to talk about [specific product/service]. What do you know about this?"
4. Only once you're confident it understands the domain, ask it to write.
This coaxing process ensures the AI isn't pattern-matching to generic industry content — it's working from actual knowledge of the client.
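The warm-up steps above can be sketched as a scripted, ordered set of prompts. This is a minimal illustration, not a prescribed implementation — the function name, URL, and topic are placeholders, and each prompt is meant to be sent as its own conversational turn with the reply checked before proceeding:

```python
def build_warmup_turns(website_url: str, topic: str) -> list[str]:
    """Return the warm-up prompts, in order.

    Send each as its own turn; read the model's reply and correct any
    misunderstandings before moving on to the next step.
    """
    return [
        # Step 1: ground the model in the client's actual business.
        f"Go to this website and tell me what this company does: {website_url}",
        # Step 3: narrow to the specific topic and probe existing knowledge.
        f"Now we're going to talk about {topic}. What do you know about this?",
    ]

# Placeholder client and topic — swap in the real ones.
turns = build_warmup_turns("https://example.com", "managed IT services")
```

Only after the model answers both turns accurately do you ask it to write (step 4).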
When you have source material (emails, briefs, decks, past content), don't try to summarize it in a prompt. Feed the documents directly — Claude Projects and NotebookLM are both built for working from an uploaded document library.
When you don't have documents: Write a long, detailed prompt that covers who the client is, what they do, what the piece is for, and any specific details the AI couldn't know. The more context, the better.
AI tools will confidently invent statistics, citations, and facts if you let them. Prevent this explicitly:
Add this to your prompt:
"Do not include any facts, statistics, or specific claims unless you can support them with a citation."
This changes how the model writes. It will hedge appropriately or omit unsupported claims rather than fabricating them.
After receiving output, verify statistics:
Ask the AI directly: "Are all of the statistics you gave me accurate? Can you provide citations for each one?" It will often revise or retract figures it cannot support.
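Both the grounding rule and the verification follow-up can be kept as reusable snippets so they're applied consistently. A small sketch — the helper name and example task are hypothetical; the prompt strings are the ones quoted above:

```python
# The anti-fabrication rule quoted above, appended to every writing task.
GROUNDING_RULE = (
    "Do not include any facts, statistics, or specific claims "
    "unless you can support them with a citation."
)

# The post-output verification question quoted above.
VERIFY_FOLLOWUP = (
    "Are all of the statistics you gave me accurate? "
    "Can you provide citations for each one?"
)

def grounded_prompt(task: str) -> str:
    """Append the grounding rule to any writing task prompt."""
    return f"{task}\n\n{GROUNDING_RULE}"

# Hypothetical example task.
prompt = grounded_prompt("Write a 600-word post on the client's backup service.")
```

After the model responds, send `VERIFY_FOLLOWUP` as the next turn and watch for revised or retracted figures.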
Running the same content through multiple AI tools improves quality and reduces detectable AI patterns: draft in one tool, then have another review and revise the result.
This works especially well for code, but applies to any content type.
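The multi-tool pass can be pictured as a simple pipeline. This is purely illustrative — the `revise` callables stand in for pasting the draft into each tool with a review prompt; nothing here calls a real API:

```python
from typing import Callable

def cross_check(content: str,
                passes: list[tuple[str, Callable[[str], str]]]) -> str:
    """Run content through a sequence of review passes, one per tool.

    Each pass is (tool_name, revise_fn); revise_fn is a stand-in for
    asking that tool to review and revise the current draft.
    """
    for name, revise in passes:
        content = revise(content)
    return content

# Dummy stand-ins for "ask Claude, then ChatGPT, to review the draft".
result = cross_check(
    "first draft",
    [("Claude", lambda s: s + " (reviewed by Claude)"),
     ("ChatGPT", lambda s: s + " (reviewed by ChatGPT)")],
)
```

The point of the structure is ordering: each tool sees the previous tool's revision, not the original draft.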
Different tools have different strengths. Using the wrong one for the job produces worse results. See [1] for a full breakdown, but in brief:
| Tool | Best For |
|---|---|
| Claude | Long-form writing, strategy, nuanced content, Projects with uploaded docs |
| ChatGPT | General tasks, code review, creative variation |
| NotebookLM | Research synthesis from your own document library |
| Perplexity | Internet research with citations, competitor discovery, finding what others say about a topic |
| Gemini | Google Workspace integration — creating Docs, Sheets, accessing Drive |
Perplexity note: Perplexity is essentially a high-speed, citation-backed browser. Every claim it makes links to a source. It's excellent for competitive research and finding industry data, but weak at creative or analytical thinking. Use it to find, not to write.
Gemini note: The only tool that can natively create Google Sheets or Docs. If your output needs to land in Google Workspace, Gemini is the right choice.
AI-generated content that gets sent to clients without review is a reputational risk: a client noticing "clearly AI" writing damages trust. Always have a human review and edit AI output before delivery.
An internal professional development session on AI tool usage is planned for October 2025 (after the client health check cycle). The session will include live walkthroughs of these techniques. Attendance is expected for account managers, Raphael, and Gavin.
Contact Isalia Ramirez to confirm scheduling.