---
title: Verbalized Sampling Technique
type: concept
created: '2026-04-05'
updated: '2026-04-05'
source_docs:
- raw/2025-11-13-using-ai-part-2-101398141.md
tags:
- ai
- prompting
- creative
- brainstorming
- technique
layer: 2
client_source: null
industry_context: null
transferable: true
---

# Verbalized Sampling Technique

## Overview

Verbalized sampling is an AI prompting technique where you explicitly request a specific number of **distinct** outputs in a single prompt. The name draws an analogy to statistical sampling: just as a data scientist specifies a sample size to get a representative spread, you tell the AI how many ideas you want and demand that they be genuinely different from one another.

The technique was demonstrated by [[people/mark-hope|Mark Hope]] in an internal training session on advanced AI prompting. It is the foundation of a broader two-part method that also includes [[wiki/knowledge/ai-tools/probability-control-technique|Probability Control]] (tail sampling).

## The Problem It Solves

When you give an AI a simple, open-ended prompt ("give me some ad copy ideas"), it defaults to the highest-probability outputs — the most statistically common answers in its training data. The result is what Mark Hope calls **"AI slop"**: generic, predictable content that looks identical to what every other agency or competitor is generating. This commoditizes your output, just as a failure to differentiate commoditizes a client's offering.

Verbalized sampling forces the model to spread its outputs across a wider region of its probability distribution rather than clustering around the obvious center.

## The Technique

### Core Structure

Every verbalized sampling prompt has three components:

1. **A specific count** — "Give me five…" (or ten, or twenty-five; five is a practical default)
2. **A diversity keyword** — "distinct," "different," or "unique" to signal that variants of the same idea are not acceptable
3. **A no-repetition constraint** — "Don't repeat the same core idea with minor tweaks"

### Example Prompt Skeleton

```
I need [N] [distinct / different / diverse] [ideas / solutions / pieces of copy] for [topic].
Generate [N] [distinct] options.
For each option, provide [what you want].
Ensure the outputs cover a wide range.
Do not repeat the same core idea with minor variations.
```
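The skeleton can be assembled programmatically when you run the same structure across many topics. A minimal sketch in Python — the function name and parameters are illustrative, not from the session; only the three core components (count, diversity keyword, no-repetition constraint) come from the technique itself:

```python
def build_vs_prompt(n: int, item: str, topic: str, per_option: str) -> str:
    """Assemble a verbalized-sampling prompt from its three components:
    a specific count, a diversity keyword, and a no-repetition constraint."""
    return (
        f"I need {n} distinct {item} for {topic}.\n"
        f"Generate {n} distinct options.\n"
        f"For each option, provide {per_option}.\n"
        "Ensure the outputs cover a wide range.\n"
        "Do not repeat the same core idea with minor variations."
    )

prompt = build_vs_prompt(
    5, "pieces of ad copy", "an e-bike retailer",
    "a headline and a one-line rationale",
)
print(prompt)
```

The hard-coded "distinct" keyword and the final constraint line are deliberate: they are the parts of the template that do the diversity work, so they should not be left optional.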

### Why the Diversity Keyword Matters

Asking for "five ideas" alone can still yield five variations on the same theme. Adding "distinct" or "different" instructs the model to sample broadly across its knowledge base — spanning different disciplines, angles, emotional registers, or business models — rather than generating minor permutations of one idea.

Once you have five distinct ideas, you *can* then ask for five variations of a single chosen idea to narrow in. But start broad.

## Pairing with Probability Scores

Verbalized sampling controls *how many* ideas you get and *how different* they should be. It does not tell you *how original* they are. Always pair it with a probability request:

> "For each option, assign a probability from common to rare showing how expected or unconventional it is."

This surfaces which outputs are still predictable (e.g., 65%) versus genuinely unusual (e.g., 10%). See [[wiki/knowledge/ai-tools/probability-control-technique|Probability Control Technique]] for the next step: deliberately sampling from the low-probability tail.
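If the model annotates each option with a percentage, the low-probability tail can be separated mechanically before the human review step. A minimal sketch assuming each option line carries an "NN%" annotation; the function name and the 30% threshold are arbitrary illustrations, not values from the session:

```python
import re

def score_options(options: list[str], threshold: int = 30) -> list[tuple[str, int, bool]]:
    """Extract the 'NN%' score from each option line and flag options
    whose score falls below the threshold as unconventional."""
    results = []
    for line in options:
        m = re.search(r"(\d{1,3})\s*%", line)
        if m:
            pct = int(m.group(1))
            results.append((line, pct, pct < threshold))
    return results

ideas = [
    "Retail Chain Integration - 65%",
    "Gig Worker Financial Hub - 30%",
    "Nonprofit Donation Conversion Network - 10%",
]
for text, pct, rare in score_options(ideas):
    print(f"{pct:>3}%  {'rare' if rare else 'common'}  {text}")
```

Options without a parseable percentage are silently dropped here; in practice you would surface them for manual scoring rather than discard them.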

## Applications

| Use Case | Example Prompt Fragment |
|---|---|
| **Rapid brainstorming** | "I need five diverse ideas for how to launch a reverse ATM business…" |
| **Problem solving** | "I need five different ways to approach this. Each from a different angle or discipline." |
| **Ad copy** | "Generate five completely different pieces of ad copy… ensure no two versions sound similar." |
| **Future scenario planning** | "Give me five radically different scenarios. Don't just give me the one down the middle." |

## Real Examples from Training

During the session, Mark Hope ran verbalized sampling against three client-adjacent scenarios:

- **Reverse ATM launch** — produced ideas ranging from "Retail Chain Integration" (65% probability) to "Gig Worker Financial Hub" (30%) to a "Nonprofit Donation Conversion Network"
- **Flynn Audio stagnation** — produced five solutions spanning technical product innovation, financial engineering (equipment-as-a-service), human-centric experience design, strategic market repositioning, and a "Trojan Horse Data Company" IoT concept
- **E-bike retailer ad copy** — produced angles including intellectual superiority, protective parenting, and anti-establishment rebellion, with typicality scores ranging from 5% to 25%

## Tool Selection Note

Different AI models respond differently to verbalized sampling prompts. Claude tends to produce more varied and "out-there" outputs. ChatGPT skews toward the center of the bell curve. Grok and Gemini have their own tendencies. Treat each model as a different collaborator with different strengths, and test the same prompt across tools when the stakes are high.

## Important Caveats

- **Verify before using.** AI can reproduce existing slogans, taglines, or concepts verbatim. Michał Bielerzewski noted that three of six AI-generated slogans from the session already existed online. Always search and, for brand work, run trademark checks.
- **AI as stimulus, not authority.** The goal is to accelerate and expand human creative thinking, not to replace it. The best output is a starting point for refinement, not a finished deliverable.
- **Framing for clients.** If clients question AI-assisted work, focus on output quality, not process. A carpenter isn't judged by the brand of hammer used.

## Action Items from Session

- [ ] Mark Hope to send prompt-template document to Melissa Cusumano, Gilbert Barrongo, and Michał Bielerzewski
- [ ] All team members to practice verbalized sampling and probability control in daily work

## Related

- [[wiki/knowledge/ai-tools/probability-control-technique|Probability Control Technique]]
- [[wiki/meetings/2025-11-13-using-ai-part-2|Meeting: Using AI Part 2 (Nov 13 2025)]]