wiki/knowledge/ai-tools/ai-prompt-engineering-best-practices.md · 918 words · 2025-09-30

AI Prompt Engineering Best Practices

Getting useful output from AI tools requires more than a quick question. The quality of AI-generated content is almost entirely determined by the quality of context and prompting you provide. Poor prompting produces generic, detectable AI content; good prompting produces work that's indistinguishable from expert human writing.

These practices were surfaced during an internal ops sync on 2025-09-30 and reflect hands-on experience across multiple client content projects. See also: [1].


Core Principle: Context First, Prompt Second

The most common mistake is jumping straight to the writing request. Before asking AI to produce anything, ensure it genuinely understands the subject matter.

The warm-up technique:
1. Give the AI the client's website URL and say only: "Go to this website and tell me what this company does."
2. Read its summary. If it's accurate, proceed. If not, correct it before continuing.
3. Introduce the specific topic: "Now we're going to talk about [specific product/service]. What do you know about this?"
4. Only once you're confident it understands the domain, ask it to write.

This coaxing process ensures the AI isn't pattern-matching to generic industry content — it's working from actual knowledge of the client.
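The warm-up can be scripted as a staged conversation. A minimal sketch, assuming your chat API is wrapped in an `ask(history, prompt)` callable (a hypothetical stand-in, not a real library call):

```python
def warmup_prompts(site_url, topic):
    """Build the staged warm-up prompts, in order.

    Stage 1 grounds the model in the client's site; stage 2 narrows to
    the topic. The actual writing request comes only after a human has
    verified the answers to both.
    """
    return [
        f"Go to this website and tell me what this company does: {site_url}",
        f"Now we're going to talk about {topic}. What do you know about this?",
    ]

def run_warmup(ask, site_url, topic):
    """Feed each warm-up prompt through `ask`, carrying the growing
    history so later turns see earlier context."""
    history = []
    for prompt in warmup_prompts(site_url, topic):
        reply = ask(history, prompt)
        history.append((prompt, reply))
    return history
```

The point of the helper is the ordering: the writing request is deliberately absent from the prompt list, so it cannot be sent before the grounding turns.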


Feeding Documents Instead of Prompting From Scratch

When you have source material (emails, briefs, decks, past content), don't try to summarize it in a prompt. Feed the documents directly. Two tools make this easy:

NotebookLM (Google) — upload the documents and let it synthesize across your own library.

Claude Projects — attach the documents to a Project so every conversation starts from that context.

When you don't have documents: write a long, detailed prompt that covers who the client is, what they do, what the piece is for, and any specific details the AI couldn't know. The more context, the better.


Preventing Hallucination

AI tools will confidently invent statistics, citations, and facts if you let them. Prevent this explicitly:

Add this to your prompt:

"Do not include any facts, statistics, or specific claims unless you can support them with a citation."

This changes how the model writes. It will hedge appropriately or omit unsupported claims rather than fabricating them.

After receiving output, verify statistics:
Ask the AI directly: "Are all of the statistics you gave me accurate? Can you provide citations for each one?" It will often revise or retract figures it cannot support.
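Both steps can be wrapped in small helpers so the clause is never forgotten. A sketch assuming a plain-string prompt workflow; the clause text is taken verbatim from the guidance above:

```python
# The anti-fabrication instruction, quoted from the team guidance.
NO_FABRICATION_CLAUSE = (
    "Do not include any facts, statistics, or specific claims "
    "unless you can support them with a citation."
)

# The post-hoc verification question to send after receiving output.
VERIFY_FOLLOWUP = (
    "Are all of the statistics you gave me accurate? "
    "Can you provide citations for each one?"
)

def guard_prompt(prompt):
    """Append the anti-fabrication instruction to any writing prompt."""
    return f"{prompt}\n\n{NO_FABRICATION_CLAUSE}"
```

Sending `guard_prompt("Write a product page for ...")` and then `VERIFY_FOLLOWUP` as a second turn reproduces the two-step process described above.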


The Multi-Tool Review Technique

Running the same content through multiple AI tools improves quality and reduces detectable AI patterns:

  1. Generate a draft in ChatGPT (or Claude)
  2. Take that output to the other tool and ask: "What do you think of this? Is it accurate? What's missing?"
  3. The second tool will edit, correct, and improve the first draft
  4. Each pass through a different model subtly changes the writing style, making the final output harder to identify as AI-generated

This works especially well for code, but applies to any content type.
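The pass-it-around loop is easy to express once each tool sits behind a callable. A sketch where `models` is an ordered list of chat callables (hypothetical stand-ins for the real ChatGPT and Claude APIs), each taking `(instruction, text)` and returning revised text:

```python
# The review question sent to each subsequent tool, from the steps above.
REVIEW_PROMPT = "What do you think of this? Is it accurate? What's missing?"

def cross_review(draft, models):
    """Run a draft through each model in turn.

    The first model's output becomes the second model's input, and so
    on, so each pass edits the previous tool's version rather than the
    original draft.
    """
    text = draft
    for model in models:
        text = model(REVIEW_PROMPT, text)
    return text
```

With two real model clients in `models`, this is steps 1-3 of the technique; step 4 (the style drift) is a side effect of the chaining.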


Choosing the Right Tool

Different tools have different strengths. Using the wrong one for the job produces worse results. See [1] for a full breakdown, but in brief:

| Tool | Best For |
| --- | --- |
| Claude | Long-form writing, strategy, nuanced content, Projects with uploaded docs |
| ChatGPT | General tasks, code review, creative variation |
| NotebookLM | Research synthesis from your own document library |
| Perplexity | Internet research with citations, competitor discovery, finding what others say about a topic |
| Gemini | Google Workspace integration: creating Docs, Sheets, accessing Drive |

Perplexity note: Perplexity is essentially a high-speed, citation-backed browser. Every claim it makes links to a source. It's excellent for competitive research and finding industry data, but weak at creative or analytical thinking. Use it to find, not to write.

Gemini note: The only tool in this lineup that can natively create Google Sheets or Docs. If your output needs to land in Google Workspace, Gemini is the right choice.


Reviewing AI Output Before Using It

AI-generated content that gets sent to clients without review is a reputational risk: a client noticing "clearly AI" writing damages trust. At minimum, verify every statistic (see Preventing Hallucination above), run the draft through the multi-tool review, and have a human read the final version in full before it leaves the team.


Planned Training

An internal professional development session on AI tool usage is planned for October 2025 (after the client health check cycle). The session will include live walkthroughs of these techniques. Attendance is expected for account managers, Raphael, and Gavin.

Contact Isalia Ramirez to confirm scheduling.


Sources

  1. AI Tool Selection Guide
  2. 2025 09 30 Ops Sync
  3. Content Quality Standards