Extreme Contexting did not begin as a theory. It emerged from practice.
While building AI-mediated production systems, I kept encountering the same problem: AI could generate quickly, but speed alone did not make the work reliable. Reliability came from the structure around the model — the briefs, constraints, source rules, validators, file conventions, review states, handoffs, and accumulated lessons from prior failures.
The pattern was not discovered by reading about AI workflows. It was discovered by doing the work.
The pattern
Through extended AI-mediated working sessions, I began to see that successful AI collaboration depended less on clever prompting and more on the design of the working environment. The model performed better when the work was decomposed into small, verifiable moves. It performed better when each move had bounded context. It performed better when expectations were explicit, outputs were inspectable, and recurring corrections were captured as durable artifacts instead of being repeated in chat.
The methodology was built through the practice it describes.
Artifacts like move sequences, evidence packages, validator rules, reusable briefs, and gotchas files were not theoretical constructs. They came from real failures and real corrections. When the AI drifted, the answer was not simply to ask again. The answer was to improve the context system: clarify the intent, narrow the next move, remove irrelevant context, add verification, or capture a lesson so the same failure would not repeat.
External confirmation
Only later did Jake Van Clief's work provide external confirmation of the same structural insight. His writing on AI coding environments validated what I had already found independently: the workspace itself can become the orchestration layer. Context is not just prompt material. It is architecture — file structure, constraints, examples, decisions, tests, handoffs, and scripts that shape what the model can safely do.
That convergence matters. Van Clief's work was not the source of Extreme Contexting. It was a strong parallel discovery that confirmed the direction. Two paths arrived at the same core idea: as AI becomes more capable, the environment around the model matters more, not less.
Intellectual ancestor
Extreme Programming also deserves credit as an intellectual ancestor. Its emphasis on small increments, tight feedback loops, simplicity, refactoring, and verification before trust clearly echoes through Extreme Contexting.
The bottleneck shift
But the bottleneck has changed. Extreme Programming disciplined the act of writing software. Extreme Contexting disciplines the context system that AI uses to generate software, content, analysis, and other artifacts.
That shift is the heart of the method. When production becomes cheap, the scarce skill is no longer merely making the thing. It is knowing what to ask for, what to withhold, how to sequence the work, how to verify the result, and what to preserve as durable context for the next cycle.
The methodology
Extreme Contexting names that discipline.
When code is cheap, context becomes the craft — and sufficiency becomes the discipline.
Extreme Contexting is the practice of decomposing AI-mediated work into verifiable moves and giving each move the minimum sufficient context required to succeed.
It is a methodology that emerged from building with AI, through AI, under real production pressure.
Its origin is its first proof case: Extreme Contexting was developed by practicing Extreme Contexting.
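For readers who think in code, the core loop of the practice can be sketched as a minimal structure. This is an illustrative sketch only: the names Move, Lesson, and run_move are hypothetical stand-ins invented for this example, not artifacts from the actual working system.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Move:
    """One small, verifiable unit of AI-mediated work."""
    intent: str                                        # explicit expectation for this move
    context: list[str] = field(default_factory=list)   # minimum sufficient context, nothing more
    validate: Callable[[str], bool] = lambda out: True # inspectable check on the output

@dataclass
class Lesson:
    """A durable correction, captured so the same failure does not repeat."""
    trigger: str
    fix: str

def run_move(move: Move,
             produce: Callable[[Move], str],
             lessons: list[Lesson]) -> str:
    """Run one move. On failure, capture a lesson instead of just asking again."""
    output = produce(move)
    if not move.validate(output):
        # The response to drift is to improve the context system, not to retry blindly.
        lessons.append(Lesson(trigger=move.intent,
                              fix="narrow the move, trim context, or add verification"))
        raise ValueError(f"Move failed validation: {move.intent}")
    return output
```

The point of the sketch is the shape, not the code: work is decomposed into bounded, checkable moves, and every failure leaves behind a durable artifact rather than an unrecorded chat correction.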
Colophon
Extreme Contexting was developed by Joe Haddock through applied AI-mediated production work. The methodology draws intellectual lineage from disciplined software practice and was externally confirmed by parallel work on AI coding environments, but its formulation emerged from practice. It is practiced commercially through AscendTech.