
The Extreme Contexting Manifesto

What it believes

When code is cheap, context becomes the craft — and sufficiency becomes the discipline.

AI has changed the cost of making things. Code, copy, tests, plans, interfaces, documentation, analysis, and designs can now be generated faster than humans can fully inspect, understand, or govern them. The bottleneck is no longer production. The bottleneck is purpose, sequencing, context, and verification.

The deeper mechanism: AI did not only make production cheaper. It made output exceed inspection bandwidth. Generation is no longer the scarce resource — human judgment applied at the right verification points is. Extreme Contexting is a governance response to that shift, not merely a productivity philosophy.

The mental model is simple:

The compiler metaphor

The environment is the compiler.
The model is the compiler engine.
Context is the source material.
Verification is the type check.

The metaphor is not deterministic. Models still vary, drift, and fail. Verification exists because the same context can still produce different outputs.

Extreme Contexting is a methodology for working with AI without surrendering judgment to it. It begins with a simple belief: the quality of AI output depends on the quality, shape, and sufficiency of the context surrounding the work.

The doctrine

Not more context. Not longer prompts. Not bigger asks.
Minimum sufficient context for the next verifiable move.

Principle 1: Context is the craft

The prompt is not the main artifact. The real artifact is the context system: the intent, constraints, examples, source material, tests, acceptance criteria, file structure, naming conventions, prior decisions, exclusions, and review loops that shape what the model can safely produce.

Context is not storage. Context is instruction, evidence, boundary, and proof.

A strong practitioner does not merely ask better questions. A strong practitioner designs the conditions under which the model can make the next correct move.

Principle 2: Sufficiency beats volume

More context is not better. Too little context forces the model to guess. Too much context invites drift, contradiction, overfitting, and false confidence.

Extreme Contexting seeks sufficiency: the least context required for the AI to produce a useful, reviewable, verifiable result. Every piece of context must earn its place. If it does not clarify the move, constrain the work, improve the output, or support verification, it is noise.

Sufficiency is not a measurable property known before the work begins. It is a standard the loop converges toward through generation, verification, failure, subtraction, and capture. This is why the loop exists.

Principle 3: The move is the unit of work

AI should not be asked to "do the project." It should be asked to advance the next verifiable move. A move is the smallest useful step that creates an inspectable artifact.

A good move has one intention, one primary output, a bounded context packet, a clear verification condition, and a reversible or reviewable result. The hard part of AI work is not getting the model to produce. The hard part is deciding what the next move should be.
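The anatomy of a move described above can be sketched as a small data structure. This is a minimal illustration, not a prescribed schema; every field name here is an assumption chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Move:
    """One verifiable unit of work (field names are illustrative)."""
    intention: str               # the single purpose of this move
    primary_output: str          # the one artifact it should produce
    context_packet: list[str]    # the bounded context supplied for it
    verification: str            # the explicit check the output must pass
    reversible: bool = True      # can the result be rolled back or reviewed?

# A well-formed move has exactly one intention and one primary output.
move = Move(
    intention="Extract date parsing into a helper",
    primary_output="src/dates.py",
    context_packet=["existing call sites", "naming conventions", "tests"],
    verification="test suite passes; diff reviewed",
)
```

Writing the move down before prompting forces the hard decision — what the next move is — to happen first.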

Principle 4: Sequence before context

Sufficiency is impossible without sequencing. Before deciding what context the AI needs, decide what move it is making. Large work must be decomposed by uncertainty, risk, and verification — not by vague ambition.

Do not begin with: What should I prompt? Begin with: What is the next move that can be verified? Then ask: What is the minimum sufficient context for that move?

Principle 5: Verification precedes trust

AI output is not accepted because it sounds right. It is accepted because it passes a defined check. Verification may be a test suite, a diff review, a rubric, a validator, a source check, a user story, a design constraint, or a working demo.

Without verification, AI work becomes aesthetic plausibility. Extreme Contexting rejects vibe-based acceptance.
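The acceptance rule above can be sketched as a gate that runs explicit checks before an output is trusted. The checks here are hypothetical placeholders; the point is that "sounds right" is deliberately not among them.

```python
def passes_checks(output: str, checks) -> bool:
    """Accept an artifact only if every defined check passes.

    `checks` is a list of (name, predicate) pairs; plausibility
    is not a predicate.
    """
    return all(predicate(output) for _, predicate in checks)

# Hypothetical checks for a generated function body.
checks = [
    ("non-empty", lambda out: bool(out.strip())),
    ("has docstring", lambda out: '"""' in out),
    ("no TODO left", lambda out: "TODO" not in out),
]

artifact = 'def parse(s):\n    """Parse a date string."""\n    return s'
accepted = passes_checks(artifact, checks)
```

In practice the predicates are a test suite, a validator, or a rubric; the shape — defined check before acceptance — stays the same.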

Principle 6: Failures are context signals

When AI fails, the first response should not be to prompt harder. Repeated failure usually means the context system is broken. Maybe the intent was vague. Maybe the task was too large. Maybe the model had contradictory examples. Maybe the acceptance criteria were missing. Maybe the wrong files were included. Maybe the model was asked to infer sequencing.

Extreme Contexting treats failure as information. A bad output is often a diagnosis of bad context.

When diagnosing context failure, prefer subtraction before addition. Stale, contradictory, excessive, or overfit context is more often the cause than missing context. The reflex to add more instructions after a failure is the most common way sufficiency gets abandoned.

Principle 7: Refactor the context

Traditional refactoring improves code without changing behavior. Extreme Contexting goes further: when the output fails, improve the system that produced the output.

Do not merely correct the artifact. Refactor the context system that generated it: remove stale instructions, split overloaded files, promote repeated guidance into reusable rules, delete misleading examples, clarify constraints, sharpen acceptance criteria, capture decisions where the human and model can both see them.

If the same correction is made twice, it belongs in the context system. Context ledgers are temporary — repeated entries graduate into rules, validators, configuration, or archive. A growing ledger is a signal that refactoring is overdue.
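The "same correction twice" rule can even be checked mechanically: count ledger entries and flag repeats as candidates for promotion into durable rules. The ledger-as-list format is an assumption for this sketch.

```python
from collections import Counter

def promotion_candidates(ledger: list[str], threshold: int = 2) -> list[str]:
    """Return corrections that recur `threshold` or more times.

    A recurring entry is the signal that it should graduate out of
    the ledger into a rule, validator, or configuration.
    """
    counts = Counter(ledger)
    return [entry for entry, n in counts.items() if n >= threshold]

ledger = [
    "use ISO-8601 dates",
    "prefer pathlib over os.path",
    "use ISO-8601 dates",   # second occurrence: belongs in the rules
]
candidates = promotion_candidates(ledger)
```

A ledger that keeps growing without producing candidates like these is the overdue-refactoring signal the principle describes.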

Principle 8: Artifacts beat memory

Do not rely on the chat thread to hold the process together. Durable work needs durable context. The important things should live in inspectable artifacts: briefs, move cards, tests, specs, examples, decision records, validation reports, source files, and checklists.

Memory hidden inside a conversation is fragile. Artifacts make the work repeatable, reviewable, and improvable.

Principle 9: Humans own intent

AI can generate options, produce code, summarize, refactor, test, compare, and explain. But AI does not own purpose. Humans own the why, the tradeoff, the boundary, the risk, and the decision to accept.

Extreme Contexting does not replace judgment. It protects judgment from being overwhelmed by speed.

Principle 10: The loop
Intent → Sequence → Context → Generate → Verify → Refactor → Capture

Intent names the purpose. Sequence identifies the next verifiable move. Context supplies only what that move requires. Generate lets the AI produce the artifact. Verify checks the result against explicit expectations. Refactor improves the code, output, or context. Capture turns the lesson into durable context for future work.

The loop compounds. Every good cycle should make the next cycle easier.
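One minimal sketch of the seven-step cycle, assuming the caller supplies each step as a function; nothing here is a prescribed API, only the ordering and the retry-through-refactoring behaviour.

```python
def contexting_loop(intent, next_move, build_context, generate, verify,
                    refactor_context, capture, max_cycles=5):
    """Run Intent → Sequence → Context → Generate → Verify → Refactor → Capture.

    All steps are caller-supplied callables; this sketch only encodes
    the order and the rule that failed verification sends the work
    back through context refactoring, not harder prompting.
    """
    for _ in range(max_cycles):
        move = next_move(intent)            # Sequence: the next verifiable move
        context = build_context(move)       # Context: only what that move needs
        artifact = generate(move, context)  # Generate: produce the artifact
        if verify(move, artifact):          # Verify: explicit check, not vibes
            capture(move, artifact)         # Capture: make the lesson durable
            return artifact
        refactor_context(move, artifact)    # Refactor: fix the context system
    raise RuntimeError("loop did not converge; the context system needs work")
```

The `capture` step is what makes the loop compound: each accepted artifact leaves durable context behind for the next cycle.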

Governing question

What is the minimum sufficient context required for the next verifiable move?

That question replaces prompt stuffing with discipline. It replaces vague delegation with sequencing. It replaces plausible output with verified progress. It turns AI from a magic box into a working system.

Closing

AI makes production cheap. But cheap production without context creates waste at machine speed. Extreme Contexting is the discipline of shaping purpose, sequence, context, and verification before generation begins.

Its origin is its first proof case: Extreme Contexting was developed by practicing Extreme Contexting.