Vocabulary
Canonical terms for discussing disciplined AI development with precision. Each term names a concept that recurs across field notes and the thesis.
Skipped Intent Manifestation
The act of proceeding to code generation without first articulating what should be built, why, and under what constraints. The developer prompts, the agent executes, and the gap between what was meant and what was produced is discovered only after the fact.
Why it matters
This is the root cause Exogenesis targets. Every downstream problem -- drift, assumption leakage, rework -- traces back to this skip. Addressing it means inserting a deliberate reasoning step before the agent touches code.
Assumption Leakage
When a coding agent fills gaps in underspecified instructions with plausible but undeclared assumptions, embedding decisions the developer never reviewed into the output.
Why it matters
Leaked assumptions compound silently. Each one is a latent decision that may conflict with the developer's actual intent, other parts of the codebase, or future changes. They are invisible until something breaks.
Drift
The measurable divergence between what a developer intended and what the agent produced. Drift can be functional (wrong behavior), structural (wrong architecture), or scopal (built more or less than requested).
Why it matters
Drift is the primary metric in Exogenesis experiments. Reducing drift means the agent's output more closely matches the developer's actual intent, reducing review time, rework, and latent bugs.
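The three drift categories could be recorded along these lines. This is an illustrative sketch, not part of Exogenesis; the class and field names are assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class DriftKind(Enum):
    FUNCTIONAL = "wrong behavior"
    STRUCTURAL = "wrong architecture"
    SCOPAL = "built more or less than requested"

@dataclass
class DriftFinding:
    kind: DriftKind
    intended: str   # what the developer meant
    produced: str   # what the agent actually built

# Scopal drift: the agent built more than was requested.
finding = DriftFinding(
    kind=DriftKind.SCOPAL,
    intended="a todo list with due dates",
    produced="a todo list with due dates, tags, and a calendar view",
)
```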
Reset Instability
The degradation in output quality and consistency that occurs when an agent session is interrupted and a new session must continue from partial work without the original conversation context.
Why it matters
Real development involves interruptions, context switches, and session limits. If an agent cannot resume reliably, every reset becomes a potential source of conflicting implementations and wasted effort.
Intent Artifact
A structured document produced before code generation that declares the scope, constraints, input/output contracts, and non-goals of a task. It serves as a portable contract between the developer and any agent session.
Why it matters
Intent artifacts are the mechanism Exogenesis uses to prevent skipped intent manifestation. They make implicit decisions explicit, survive session resets, and give agents a verifiable reference instead of forcing them to infer context.
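The shape of an intent artifact can be sketched as a simple data structure. The field names here are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class IntentArtifact:
    """Hypothetical sketch of an intent artifact's declared fields."""
    scope: str                                             # what this task is
    constraints: list[str] = field(default_factory=list)   # explicit boundaries
    contracts: dict[str, str] = field(default_factory=dict)  # input/output contracts
    non_goals: list[str] = field(default_factory=list)     # what must NOT be built

artifact = IntentArtifact(
    scope="Add CSV export to the reports page",
    constraints=["reuse the existing serializer", "no new dependencies"],
    contracts={"input": "report id", "output": "RFC 4180 CSV stream"},
    non_goals=["Excel export", "scheduled exports"],
)
```

Because the artifact is plain data rather than conversation history, it can be stored, diffed, reviewed, and handed to a fresh agent session unchanged.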
Pre-execution Gate
A checkpoint inserted between the developer's request and the agent's code generation, during which intent is articulated, reviewed, and approved before any implementation begins.
Why it matters
The gate is where discipline happens. Without it, the path from prompt to code is uninterrupted and unexamined. With it, assumptions are surfaced, scope is confirmed, and the agent operates against a declared contract rather than an inferred one.
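A minimal sketch of such a gate, assuming intent is carried as a plain dictionary and approval is a boolean review outcome; the function and key names are hypothetical.

```python
def pre_execution_gate(artifact: dict, approved: bool) -> None:
    """Refuse to proceed to code generation until intent is declared and approved."""
    required = ("scope", "constraints", "non_goals")
    missing = [key for key in required if not artifact.get(key)]
    if missing:
        raise ValueError(f"intent incomplete, missing: {missing}")
    if not approved:
        raise PermissionError("intent not yet reviewed and approved")
    # Only past this point may the agent begin implementation.

# Passes the gate: intent is complete and has been approved.
pre_execution_gate(
    {"scope": "CSV export", "constraints": ["no new deps"], "non_goals": ["Excel export"]},
    approved=True,
)
```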
Constraint Surface Area
The total set of explicit boundaries provided to an agent for a given task: what to build, what not to build, which patterns to follow, which files to touch, and which conventions to preserve.
Why it matters
A larger constraint surface area gives the agent less room to guess. Experiments show that expanding constraints correlates directly with reduced drift, fewer broken tests, and smaller diffs.
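One way to picture the "total set of explicit boundaries" is as a count over the categories listed above. The measure and the category names are assumptions for illustration only.

```python
def constraint_surface_area(spec: dict[str, list[str]]) -> int:
    """Hypothetical measure: count every explicit boundary handed to the agent."""
    return sum(len(items) for items in spec.values())

spec = {
    "build": ["CSV export button on the reports page"],
    "do_not_build": ["Excel export", "scheduled exports"],
    "patterns": ["use the existing serializer"],
    "files": ["reports/export.py"],
    "conventions": ["snake_case naming", "no new dependencies"],
}
area = constraint_surface_area(spec)  # 7 explicit boundaries
```

Each added boundary removes one dimension along which the agent would otherwise have to guess.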
Spec vs Prompt
The distinction between a structured intent artifact (spec) that declares constraints, scope, and contracts, versus a natural-language instruction (prompt) that describes desired behavior without explicit boundaries.
Why it matters
Prompts are necessary but insufficient. They tell the agent what to do but not what to avoid, what to preserve, or where the boundaries are. Specs close the gap. Exogenesis argues that the transition from prompt-only to spec-gated workflows is where reliability improvements live.
Protected Values
Values that must be preserved when trade-offs occur during implementation. Examples include speed, trust, transparency, traceability, simplicity, privacy, and data integrity. A protected value is not a nice-to-have preference — it is something the implementation must not optimize away silently.
Why it matters
Without explicit protected values, agents make reasonable-looking trade-offs that violate what actually matters. An agent might sacrifice data integrity for a cleaner UI, or trade transparency for a more elegant architecture. Protected values make these boundaries non-negotiable before code exists.
Trade-off Posture
An explicit declaration of what may be optimized and what must not be sacrificed during implementation. Trade-off posture separates negotiable preferences from non-negotiable constraints.
Why it matters
Agents constantly make trade-offs during implementation, but they make them silently. Without a declared posture, an agent might polish the UI at the expense of correctness, or optimize performance at the cost of auditability. Trade-off posture prevents aligned-looking output that is misaligned in substance.
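A trade-off posture could be declared as two disjoint sets, one negotiable and one protected; this is a sketch under assumed names, not a defined Exogenesis format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TradeoffPosture:
    """Hypothetical declaration of what may be optimized vs. what must be preserved."""
    may_optimize: frozenset[str]     # negotiable preferences
    must_preserve: frozenset[str]    # protected values, non-negotiable

    def allows_sacrificing(self, value: str) -> bool:
        return value not in self.must_preserve

posture = TradeoffPosture(
    may_optimize=frozenset({"ui polish", "performance"}),
    must_preserve=frozenset({"correctness", "auditability", "data integrity"}),
)
```

Declared this way, "polish the UI at the expense of correctness" becomes a checkable violation rather than a silent judgment call.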
Failure Boundaries
Explicit declarations of which failures are unacceptable for a given product. Examples: silent miscalculation, false compliance implication, wrong product identity, hidden data loss, misleading safety posture.
Why it matters
Failure boundaries convert abstract concern into design significance. They tell the agent not just what to build, but which failure modes must be structurally prevented. Without them, the agent treats all potential failures as equally tolerable.
Product Center of Gravity
The primary concept a product is organized around. Many prompts contain multiple plausible anchors, and without explicit structure, a coding agent may choose one concept as the real center and demote the others.
Why it matters
This is one of the most common sources of product-identity drift. A prompt for a 'todo app with markdown notes and preview' has two plausible centers of gravity. An agent that anchors on 'notes' will build a fundamentally different product than one that anchors on 'todos' — and both will look correct.
Plan Substitution
When a coding agent generates an intermediate implementation plan that replaces the original user intent, and then executes that plan consistently. The result is a working implementation that faithfully follows the wrong framing.
Why it matters
Plan substitution is the most important Exogenesis concept because it explains how working software can still be the wrong software. The agent's output is polished and internally coherent, which makes the substitution invisible until someone compares it against the original intent.
Artifact-level Alignment
The observation that coding agents can align to a structured intent artifact even without understanding the philosophy behind it. When intent is represented explicitly in a strong artifact, the artifact itself guides implementation — not the prompt history or session context.
Why it matters
This is stronger than prompt-level persuasion. Prompts are transient and session-bound. Artifacts are portable, inspectable, and can survive across sessions, agents, and regeneration. Artifact-level alignment means the intent representation does the alignment work, not the conversational framing.
Verification Emergence
The tendency of explicit intent artifacts to naturally generate downstream verification obligations — test expectations, reset semantics, persistence guarantees, rule-to-behavior mappings, checklists, and acceptance boundaries.
Why it matters
Verification does not need to be invented from scratch every time. When intent is made explicit, the structure of that intent reveals what must be tested. Without explicit intent, tests may exist but target the wrong concept, or hidden assumptions remain untested.
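The way explicit intent yields verification obligations can be sketched mechanically: each declared field implies something to check. The function and artifact keys are hypothetical.

```python
def verification_obligations(artifact: dict) -> list[str]:
    """Hypothetical sketch: derive checks from an intent artifact's explicit fields."""
    checks = []
    # Every declared contract implies a test expectation.
    for contract, expectation in artifact.get("contracts", {}).items():
        checks.append(f"test that {contract} satisfies: {expectation}")
    # Every declared non-goal implies a scope check.
    for non_goal in artifact.get("non_goals", []):
        checks.append(f"verify nothing was built for: {non_goal}")
    return checks

checks = verification_obligations({
    "contracts": {"output": "RFC 4180 CSV stream"},
    "non_goals": ["Excel export"],
})
```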
Regeneration
Re-materializing an implementation from the intent artifact after code loss, deletion, or replacement. If intent is preserved well enough, implementation can be rebuilt, migrated, partially regenerated, or re-evaluated across different stacks.
Why it matters
Most software practices assume the implementation is the only serious source of truth. Exogenesis rejects that assumption. Successful regeneration shows that intent survived beyond a single implementation — that the artifact was meaningful enough to guide recovery. Regeneration is not merely a convenience feature; it is a measure of artifact strength.
Legitimate Divergence
A difference between implementations that represents a valid design choice in an area the intent artifact did not constrain, rather than a distortion of intent. Legitimate divergence is identified when: the artifact is silent on the area, the difference does not conflict with any protected value or invariant, and the product identity is unaffected.
Why it matters
Without this concept, every difference between implementations gets classified as drift, inflating the apparent significance of intent discovery. Legitimate divergence is a healthy finding — it means the intent artifact correctly left room for implementation freedom in areas that do not affect product meaning.
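The three identification criteria above can be written down as a predicate; the function and parameter names are assumptions for illustration.

```python
def is_legitimate_divergence(
    artifact_constrains_area: bool,
    conflicts_with_protected_value: bool,
    affects_product_identity: bool,
) -> bool:
    """A difference is legitimate divergence only when all three criteria hold."""
    return (
        not artifact_constrains_area          # the artifact is silent on this area
        and not conflicts_with_protected_value  # no protected value or invariant is violated
        and not affects_product_identity        # the product's identity is unchanged
    )

# A styling choice the artifact never mentioned: legitimate divergence.
legit = is_legitimate_divergence(False, False, False)
# A difference that touches a protected value: drift, not divergence.
drift = not is_legitimate_divergence(False, True, False)
```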