wiki/knowledge/ai-tools/verbalized-sampling-technique.md · 866 words · 2026-04-05

Verbalized Sampling Technique

Overview

Verbalized sampling is an AI prompting technique where you explicitly request a specific number of distinct outputs in a single prompt. The name draws an analogy to statistical sampling: just as a data scientist specifies a sample size to get a representative spread, you tell the AI how many ideas you want and demand that they be genuinely different from one another.

The technique was demonstrated by [1] in an internal training session on advanced AI prompting. It is the foundation of a broader two-part method that also includes [2] (tail sampling).

The Problem It Solves

When you give an AI a simple, open-ended prompt ("give me some ad copy ideas"), it defaults to the highest-probability outputs — the most statistically common answers in its training data. The result is what Mark Hope calls "AI slop": generic, predictable content that looks identical to what every other agency or competitor is generating. Relying on it commoditizes your output in the same way that failing to differentiate commoditizes a client's business.

Verbalized sampling forces the model to spread its outputs across a wider region of its probability distribution rather than clustering around the obvious center.

The Technique

Core Structure

Every verbalized sampling prompt has three components:

  1. A specific count — "Give me five…" (or ten, or twenty-five; five is a practical default)
  2. A diversity keyword — "distinct," "different," or "unique" to signal that variants of the same idea are not acceptable
  3. A no-repetition constraint — "Don't repeat the same core idea with minor tweaks"

Example Prompt Skeleton

I need [N] [distinct / different / diverse] [ideas / solutions / pieces of copy] for [topic].
Generate [N] [distinct] options.
For each option, provide [what you want].
Ensure the outputs cover a wide range.
Do not repeat the same core idea with minor variations.
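When prompts are sent through an API rather than typed by hand, the skeleton above can be assembled programmatically so the count, diversity keyword, and no-repetition constraint are never forgotten. A minimal sketch (the function name and defaults are illustrative, not from the session):

```python
def build_sampling_prompt(n, subject, topic, detail="a one-sentence summary"):
    """Assemble a verbalized sampling prompt from the three core components:
    a specific count, a diversity keyword, and a no-repetition constraint."""
    return (
        f"I need {n} distinct {subject} for {topic}.\n"
        f"For each option, provide {detail}.\n"
        "Ensure the outputs cover a wide range.\n"
        "Do not repeat the same core idea with minor variations."
    )
```

Keeping the constraint lines fixed while varying only the count and topic makes it easy to reuse the same structure across brainstorming, copywriting, and scenario-planning prompts.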

Why the Diversity Keyword Matters

Asking for "five ideas" alone can still yield five variations on the same theme. Adding "distinct" or "different" instructs the model to sample broadly across its knowledge base — spanning different disciplines, angles, emotional registers, or business models — rather than generating minor permutations of one idea.

Once you have five distinct ideas, you can then ask for five variations of a single chosen idea to narrow in. But start broad.

Pairing with Probability Scores

Verbalized sampling tells you how many ideas to get and how different they should be. It does not tell you how original they are. Always pair it with a probability request:

"For each option, assign a probability score from common (high) to rare (low) showing how expected or unconventional it is."

This surfaces which outputs are still predictable (e.g., 65%) versus genuinely unusual (e.g., 10%). See [3] for the next step: deliberately sampling from the low-probability tail.
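If the model tags each option with a percentage, a small parser can sort the list rarest-first so the unconventional ideas surface automatically. This is a sketch only: the "idea - NN%" line format is an assumed response convention, not something the model guarantees.

```python
import re

def rank_by_rarity(response_text):
    """Parse options tagged with a percentage (e.g. 'Flash-mob launch - 10%')
    and return (text, pct) pairs sorted rarest-first.
    The 'text - NN%' format is an assumed convention, not a model guarantee."""
    options = []
    for line in response_text.strip().splitlines():
        match = re.search(r"^(.*?)\s*[-:]\s*(\d+)\s*%\s*$", line)
        if match:
            options.append((int(match.group(2)), match.group(1).strip()))
    # Sort ascending by probability so low-probability (rare) ideas come first.
    return [(text, pct) for pct, text in sorted(options)]
```

In practice the tags may drift in format, so treat the parse as best-effort and skim the raw response as well.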

Applications

| Use Case | Example Prompt Fragment |
| --- | --- |
| Rapid brainstorming | "I need five diverse ideas for how to launch a reverse ATM business…" |
| Problem solving | "I need five different ways to approach this. Each from a different angle or discipline." |
| Ad copy | "Generate five completely different pieces of ad copy… ensure no two versions sound similar." |
| Future scenario planning | "Give me five radically different scenarios. Don't just give me the one down the middle." |

Real Examples from Training

During the session, Mark Hope ran verbalized sampling against three client-adjacent scenarios:

Tool Selection Note

Different AI models respond differently to verbalized sampling prompts. Claude tends to produce more varied and "out-there" outputs. ChatGPT skews toward the center of the bell curve. Grok and Gemini have their own tendencies. Treat each model as a different collaborator with different strengths, and test the same prompt across tools when the stakes are high.
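When testing the same prompt across tools, a crude way to compare how widely each model spreads its outputs is average pairwise word overlap: lower overlap suggests more distinct ideas. This is a rough lexical proxy, not a real diversity metric, and the model names and data below are placeholders.

```python
def lexical_overlap(a, b):
    """Jaccard word overlap between two outputs: 1.0 means identical vocabulary."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def diversity_report(outputs_by_model):
    """Given {model_name: list_of_ideas}, score each model by the average
    pairwise overlap of its ideas (lower = more spread-out outputs)."""
    report = {}
    for model, ideas in outputs_by_model.items():
        pairs = [(i, j) for i in range(len(ideas)) for j in range(i + 1, len(ideas))]
        if not pairs:
            report[model] = 0.0
            continue
        report[model] = sum(
            lexical_overlap(ideas[i], ideas[j]) for i, j in pairs
        ) / len(pairs)
    return report
```

A human read of the outputs still matters; word overlap misses ideas that use different words to say the same thing.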

Important Caveats

Action Items from Session

Sources

  1. Mark Hope
  2. Probability Control Technique
  3. Probability Control Technique
  4. Meeting: Using AI Part 2 (Nov 13, 2025)