This toolkit turns Extreme Contexting into reusable practice. Use it when working with AI on code, content, analysis, design, research, operations, or any AI-mediated work where speed needs to be governed by intent, context, and verification.
The manifesto defines the discipline. The toolkit exists for calibration: how to judge sufficiency, verify subjective work, manage context ledgers, choose models, and govern longer agentic runs.
This layer adds calibration tools, not more doctrine.
The operating loop
| Step | Action |
|---|---|
| Intent | Name the purpose of the work. |
| Sequence | Choose the next verifiable move. |
| Context | Assemble only the minimum sufficient context for that move. |
| Generate | Ask the AI to complete only that move. |
| Verify | Check the result against explicit expectations. |
| Refactor | Improve the output, the code, or the context system. |
| Capture | Preserve anything that should help the next cycle. |
The Move Card
The Move Card is the primary operating artifact of Extreme Contexting. Use it before asking AI to do meaningful work.
The Sufficiency Check
The Sufficiency Check produces the Context Packet. Before generating, ask:
- What is the job?
- What must the AI know?
- What must the AI not do?
- What examples define good output?
- What proves success?
- What context can be removed?
- What should be captured afterward?
Not minimal context. Minimum sufficient context. Enough to make the next correct move. Not enough to drift.
The Context Packet
A Context Packet is the bounded set of material given to the AI for a specific move. It may include: Move Card, relevant files, examples, constraints, source material, tests, acceptance criteria, architecture notes, prior decisions, known gotchas, style rules, validation rules.
Does this improve the next move? If not, leave it out.
Context is not storage. Context is instruction, evidence, boundary, and proof.
The Verification Contract
A Verification Contract defines what must be true before the AI output is accepted. Examples:
- All tests pass.
- No new dependencies are introduced.
- The patch changes only the requested files.
- The generated function matches the existing interface.
- The draft cites only approved sources.
- The article includes a decision table.
- The output does not recommend vendors.
- The migration is reversible.
- The summary preserves the original argument.
Verification can be automated or human-reviewed. The form matters less than the discipline. AI output is not accepted because it sounds right. It is accepted because it passes a defined check.
Verifying subjective work
Subjective work still needs pre-declared verification. The trick is not to eliminate judgment. The trick is to translate taste into observable features, exclusions, examples, and failure conditions before generation begins.
Verification Contract — editorial example
Move: Draft a homepage introduction for a technical methodology site.
Voice:
- Uses direct, declarative sentences.
- Avoids hype, metaphor stacking, and motivational language.
- Sounds like a practitioner explaining a hard-won operating rule.
- Does not use corporate phrases like "unlock value," "drive transformation," or "seamless innovation."
Structure:
- Opens with the practical problem, not a definition.
- Introduces one governing distinction in the first 150 words.
- Includes one concrete operational example.
- Ends with a decision or action, not a summary.
Failure conditions:
- Reads like generic thought leadership.
- Uses AI hype language.
- Makes claims without an example.
- Adds concepts not present in the brief.
- Sounds polished but does not clarify the work.
The "Do Only This Move" prompt frame
This is the most reusable prompt pattern in the toolkit.
Here is the move: [Move]
Here is the intent: [Intent]
Here is the context: [Context Packet]
Here are the exclusions: [What not to do]
Here is the expected output: [Output]
Here is how the result will be verified: [Verification Contract]
Do only this move.
That final sentence matters. It prevents the model from combining discovery, implementation, refactoring, documentation, and strategy into one uncontrolled pass.
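The frame is mechanical enough to render as a template function. A sketch; the function name and parameters are hypothetical, but the emitted lines mirror the frame above exactly:

```python
def do_only_this_move(move: str, intent: str, context: str,
                      exclusions: str, expected_output: str,
                      verification: str) -> str:
    """Render the 'Do Only This Move' prompt frame as a single string."""
    return "\n".join([
        f"Here is the move: {move}",
        f"Here is the intent: {intent}",
        f"Here is the context: {context}",
        f"Here are the exclusions: {exclusions}",
        f"Here is how the result will be verified: {verification}",
        f"Here is the expected output: {expected_output}",
        "Do only this move.",   # the boundary sentence, always last
    ])
```

Templating the frame keeps the boundary sentence from being forgotten under deadline pressure, which is when it matters most.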
The Agentic Sequence Contract
A Move Card governs one bounded step. An Agentic Sequence Contract governs a bounded run. The sequence contract does not replace Move Cards; it declares which moves may be created or executed, where the agent must stop, and what verification gates must pass before continuing.
For pairing, the constraint is "Do only this move." For agentic work, the constraint becomes "Do only this declared sequence, and stop at the defined gates."
Three practitioner stances
| Stance | Question |
|---|---|
| Designer | What is the next verifiable move? |
| Editor | What is the minimum sufficient context for that move? |
| Verifier | What proves the move worked? |
Model fit and sufficiency
Model choice is part of the context system, and it runs in two directions:
- Move → Model: What kind of model behavior does this move require?
- Model → Context: Given this model's behavior, what must the context packet make more explicit?
Calibration examples
Exploratory draft model:
- Give a sharper output boundary.
- Include stronger exclusions.
- Expect useful variation.
- Verify against direction, structure, and failure conditions.
Strict instruction-following model:
- Make the desired structure explicit.
- Avoid ambiguous tradeoff language.
- Provide examples of acceptable output.
- Verify completeness and constraint adherence.
Long-context model:
- Use indexes and routing anyway.
- Do not load everything by default.
- Watch for stale or contradictory context.
Code-editing model:
- Include exact files, tests, commands, and protected paths.
- Require diff-level verification.
- Prefer smaller blast radius.
When switching models, re-check sufficiency. Context that works for one model may be too thin, too noisy, or too implicit for another.
Decomposing large work
- Name the outcome. What should be true when the work is done?
- Identify the seams. Where can the work be divided safely?
- Identify the risky unknowns. What could make this fail?
- Convert unknowns into discovery moves. What must be understood before implementation?
- Convert behavior into verification contracts. What must be proven before acceptance?
- Implement in thin slices. What is the smallest useful change that can be inspected?
- Refactor after behavior is locked. What structure can be improved once correctness is proven?
The Move Card is domain-neutral. The examples below show how the same structure applies to different kinds of AI-mediated work: software changes, editorial production, and decision-support analysis.
Software example
Bad move
Build the authentication system.
Better sequence
- Summarize the current authentication flow.
- Identify where permissions are enforced.
- List existing tests related to roles.
- Write failing tests for the missing permission behavior.
- Implement the smallest change that passes those tests.
- Refactor duplicated permission logic.
- Update the auth notes with the new invariant.
Move Card example
Editorial production example
Bad move
Write the article.
Better sequence
- Define the article's job in the portfolio.
- Identify the reader and decision the article should support.
- Gather only the approved source material needed.
- Draft the opening claim and section structure.
- Generate the article from the brief and source constraints.
- Validate citations, audience fit, structure, and boundary drift.
- Revise only the failed sections.
- Convert the final draft into the delivery artifact.
- Capture any recurring failure as a rule, gotcha, or validator update.
Move Card example
Decision-support analysis example
Bad move
Analyze the vendor options.
Better sequence
- Define the decision the analysis must support.
- Identify the decision maker and acceptance criteria.
- Gather only the relevant source material.
- Separate facts, assumptions, risks, and unknowns.
- Draft a comparison matrix.
- Verify claims against source material.
- Identify unresolved questions.
- Produce the recommendation only after evidence is checked.
- Capture any recurring ambiguity as a better evaluation rule.
Move Card example
Bad ask → sequenced moves → bounded context → risk-aware output → verification → capture. The artifact changes. The discipline does not.
Failure diagnostics
When AI output fails, inspect the context before simply asking again.
- Was the move too large?
- Was the intent ambiguous?
- Was the wrong file included?
- Was a necessary file missing?
- Were examples contradictory?
- Were exclusions missing?
- Was verification undefined?
- Did the model have to infer the sequence?
- Did the context contain stale instructions?
- Was the output accepted for plausibility instead of proof?
Repeated failure is a context design signal.
The Context Ledger
Use a Context Ledger to preserve durable lessons.
A ledger is a staging area for context refactoring, not permanent context.
Lifecycle
- Capture: Record the failure, correction, or discovery.
- Review: Decide whether the entry is recurring, still relevant, and specific enough to act on.
- Promote: Move repeated lessons into rules, tests, validators, examples, architecture notes, or workspace conventions.
- Prune: Remove duplicate, stale, low-signal, or overly broad entries.
- Archive: Preserve history without keeping it in active context.
Entry template
Example entry
Anti-patterns
| Anti-pattern | Failure mode | Correction |
|---|---|---|
| Prompt stuffing | More context creates drift. | Cut to sufficiency. |
| Mega-tasking | AI infers sequence and scope. | Define the next move. |
| Vibe acceptance | Plausibility replaces proof. | Add verification. |
| Context accretion | Old instructions pile up. | Refactor the context. |
| Chat dependency | Work becomes non-repeatable. | Capture durable artifacts. |
| Spec theater | Long specs don't guide action. | Convert to Move Cards. |
| Premature refactoring | Structure changes before behavior is proven. | Lock behavior first. |
| Unbounded autonomy | Model decides purpose and acceptance. | Humans own intent. |
The cure is usually: smaller move, sharper context, clearer verification, durable capture.
Monday-morning checklist
Use this when starting real work.
1. Write the desired outcome in one sentence.
2. Choose the next verifiable move.
3. Fill out a Move Card.
4. Run the Sufficiency Check.
5. Assemble the Context Packet.
6. Define the Verification Contract.
7. Ask the AI to complete only that move.
8. Review the result against the contract.
9. Refactor the output or the context.
10. Capture anything that should persist.
11. Choose the next move.
Not a giant prompt. Not a magic workflow. A disciplined loop.