
I Develop 20 Features at Once


In the previous article, we used Context Engineering to make task planning transparent and systematic. By establishing a safe checkpoint (the Human Gate), the workflow catches architectural risks before anything ever touches the live codebase.

Building on that foundation, we can move up a level: running many feature streams through the codebase in parallel.

When future system changes are mapped out transparently, developers no longer have to work through feature logic sequentially or blindly; implementation can proceed across the entire project at once. Imagine development as holding multiple blueprints with confirmed directions, broken into small streams that are planned completely independently of each other.

The core of this mindset is the standardization of change workflows into what are technically referred to as Specs. One mandatory rule: every single Spec must be cut small enough to fit within a single AI session. This keeps the model in control of its context and prevents reasoning overload (the context-length bottleneck) within a limited analysis session.

Specifying System Design

A Spec-managed system is not an ad-hoc notepad. Before the automated engine will recognize a design, every newly created Spec must pass through the Complexity Triage station, which assigns one of three labels: Trivial, Small, or Standard.

Most Trivial changes are automatically exempted from the Design stage to conserve resources. However, if the Escalation Flags catch actions like adding a new library (new-dependency), migrating data structures (data-migration), or touching cross-layer communication (cross-boundary), mapping out the architecture in a design.md file becomes mandatory.
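To make the triage step concrete, here is a minimal sketch of how it could be implemented. The labels and escalation flag names come from this article; the function, data shapes, and size thresholds are my own illustrative assumptions:

```python
# Hypothetical sketch of the Complexity Triage step. The labels
# (Trivial/Small/Standard) and the escalation flags come from the
# workflow described above; the thresholds are illustrative guesses.

ESCALATION_FLAGS = {"new-dependency", "data-migration", "cross-boundary"}

def triage(spec: dict) -> dict:
    """Assign a complexity label and decide whether design.md is required."""
    flags = set(spec.get("flags", []))
    files = spec.get("files_touched", 0)

    # Any escalation flag forces the full Design stage, regardless of size.
    needs_design = bool(flags & ESCALATION_FLAGS)

    if files <= 1 and not needs_design:
        label = "Trivial"   # exempt from the Design stage entirely
    elif files <= 3 and not needs_design:
        label = "Small"
    else:
        label = "Standard"

    return {"label": label, "needs_design": needs_design}

print(triage({"files_touched": 1}))                               # Trivial, no design
print(triage({"files_touched": 2, "flags": ["new-dependency"]}))  # design.md mandatory
```

The key property is that the escalation flags override size: even a two-line change that adds a dependency still goes through Design.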

These Design documents are structured as follows:

  • Gap Analysis: what gap in the system does the implementation fill?
  • Architecture Decisions: what is the final architectural approach, and why were the alternatives rejected?
  • Risk Map: the defensive strategy against side effects.

For changes rated Medium risk or higher, plain prose is not enough. The developer requires the AI to express the entire logic as data-flow diagrams (Mermaid). The rule is that architectural complexity must be made visible for the human reviewer to read.
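As an illustration of what such a diagram looks like, here is a generic Mermaid data-flow sketch of the kind that might appear in a design.md. The flow itself is a made-up example, not one from the article:

```mermaid
graph LR
    Client[Client request] --> API[API layer]
    API --> Auth{Auth check}
    Auth -- ok --> Service[Feature service]
    Auth -- fail --> Reject[401 response]
    Service --> DB[(Data store)]
```

Even at this size, a diagram exposes the cross-layer hops (API to Auth to Service to storage) far faster than a paragraph of prose would.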

Breaking Down Tasks – The Core of Execution

Downstream of the Architecture Decisions, the system generates a batch of small, tightly ordered Tasks (an execution-ordered checklist). The list is locked in priority sequence: downstream tasks are hard-linked to their prerequisites and cannot start until the upstream tasks have finished cleanly. The code-generation phase of cf-build (the engineering agent) works through this list strictly in order, so every piece of logic connects without gaps.

The size of each Task is hard-capped: the implementation must fit within 50 to 200 lines of code per AI session. Any Task that touches five or more existing files immediately raises a red flag and must be split into sequential phases; this is how the Epic Task trap is avoided.
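Those caps are easy to enforce mechanically. A minimal sketch, using the thresholds from the article (the function name and data shapes are my own):

```python
# Guardrails from the article: 50-200 lines per task, red flag at
# five or more files touched. Names and shapes are illustrative.

MIN_LINES, MAX_LINES, MAX_FILES = 50, 200, 5

def check_task_size(estimated_lines: int, files_touched: int) -> list[str]:
    """Return a list of violations; an empty list means the task fits."""
    problems = []
    if not (MIN_LINES <= estimated_lines <= MAX_LINES):
        problems.append(
            f"size {estimated_lines} outside {MIN_LINES}-{MAX_LINES} lines"
        )
    if files_touched >= MAX_FILES:
        problems.append(f"touches {files_touched} files: split into phases")
    return problems

print(check_task_size(120, 3))   # []  -> fits in one AI session
print(check_task_size(400, 6))   # two violations: too big, too many files
```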

The structure of each Task rests on four pillars:

  • Dependencies (Deps): which tasks must finish before this one can start.
  • Approach: the analysis agent translates the architecture into concrete pointers (e.g., "Reuse the model at auth/handler.ts:45"). This Bridging Context step gives the code-gen AI an immediate reference point, so it never has to reverse-engineer the codebase for clues.
  • Tests: core logic changes must be paired with unit tests; reworked UI workflows must be covered by E2E tests. The n/a (no tests) option is strictly limited to text and translation edits.
  • Done Criteria: completion must be stated as something explicitly measurable.
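Put together, a Task record carrying the four pillars might be modeled like this. The field names mirror the pillars above; the record shape and example values are hypothetical (the auth/handler.ts pointer is the one quoted in the article):

```python
from dataclasses import dataclass

# Hypothetical Task record mirroring the four pillars described above.
@dataclass
class Task:
    id: str
    deps: list[str]       # Dependencies: tasks that must finish first
    approach: str         # Bridging Context: concrete file:line pointers
    tests: str            # "unit", "e2e", or "n/a" (text/translation only)
    done_criteria: str    # an explicitly measurable completion check

t = Task(
    id="T-07",
    deps=["T-03", "T-05"],
    approach="Reuse the model at auth/handler.ts:45",
    tests="unit",
    done_criteria="login endpoint returns 200 with a valid session token",
)
print(t.deps)   # ['T-03', 'T-05']
```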

Validation — Oracle Reviewing the Plan

The automated review engine is cf-oracle, an independent agent fully isolated from the planning processes above it. Its sole design function is finding holes in the logic:

  • Do the Task dependencies contain any cycles?
  • Do the Specs fully cover the system-level test scenarios?
  • Does the Design leave any module exposed that the initial Discovery pass failed to scan?
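The first check, cycle detection, is straightforward to sketch. The graph encoding below is my own assumption (each task mapped to the tasks it waits on); the peeling approach is a standard technique equivalent to Kahn's algorithm:

```python
def has_cycle(deps: dict[str, list[str]]) -> bool:
    """True if the task dependency graph contains a cycle.

    deps maps each task id to the tasks it waits on; we repeatedly
    "finish" every task whose prerequisites are all done. If at some
    point nothing can finish, a cycle is blocking progress.
    """
    remaining = {t: set(d) for t, d in deps.items()}
    done: set[str] = set()
    while remaining:
        ready = [t for t, d in remaining.items() if d <= done]
        if not ready:
            return True          # nothing can start: a cycle blocks progress
        for t in ready:
            done.add(t)
            del remaining[t]
    return False

print(has_cycle({"A": [], "B": ["A"], "C": ["B"]}))      # False: a clean chain
print(has_cycle({"A": ["C"], "B": ["A"], "C": ["B"]}))   # True: A -> C -> B -> A
```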

The Oracle's output is split into two lists: must-fix (mandatory immediate adjustments) and nice-to-fix (recommended improvements).

This is where the human checkpoint carries real weight: the lead developer holds the final say in triage. The engineer decides to Accept each structural fix or Reject it with a logged reason. Auto-accepting is strictly forbidden.
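The two rules here, every decision is human-made and every rejection carries a logged reason, can be enforced in a few lines. A sketch with hypothetical data shapes (the must-fix/nice-to-fix split and the no-auto-accept rule are from the article):

```python
# Hypothetical triage of Oracle findings. Shapes and names are mine;
# the rules enforced are the ones stated above.

def triage_finding(finding: dict, decision: str, reason: str = "") -> dict:
    """Record a human decision on one Oracle finding."""
    if decision not in ("accept", "reject"):
        raise ValueError("decision must be 'accept' or 'reject'")
    if decision == "reject" and not reason:
        raise ValueError("rejecting a finding requires a logged reason")
    return {**finding, "decision": decision, "reason": reason}

f = {"id": "O-12", "severity": "must-fix", "note": "dependency cycle detected"}
print(triage_finding(f, "reject", "false positive: the dep list was stale"))
```

Note there is deliberately no code path that accepts a finding without a human-supplied decision string: that is the "no auto-accept" rule made structural.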

Making the Change Process Transparent

A Spec's execution chain is no longer just a set of management tickets; it is elevated to documenting the entire development intent.

Approving a design concept demands a series of trade-offs, dissenting reviews, and genuine debate between humans and the Oracle. Every modification leaves a timestamped record that grows with the codebase, gradually building a knowledge graph. Years later, even after the team has turned over, reading that record quickly answers questions like: why does this file take such an expensive approach instead of a simpler logic flow?

Once the runway is paved with tightly coupled Specs -> Design -> Tasks, dispatching them to the build stations (cf-build) for automated execution is safe. The coding engine runs on rails, with no guessing and no deviation from the agreed structure.
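With the task list dependency-ordered, fanning independent tasks out to parallel build sessions becomes mechanical. A minimal sketch using a thread pool; the run_build function is a stand-in for a cf-build session, since the article does not specify the agent's API:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for one cf-build session; the real agent's API is not
# specified in the article, so this just simulates completing a task.
def run_build(task_id: str) -> str:
    return f"{task_id}: done"

def dispatch_wave(ready_tasks: list[str], max_sessions: int = 20) -> list[str]:
    """Run one wave of mutually independent tasks in parallel sessions."""
    with ThreadPoolExecutor(max_workers=max_sessions) as pool:
        # Executor.map preserves input order in its results.
        return list(pool.map(run_build, ready_tasks))

# Tasks in the same wave have no dependencies on each other, so they can
# all run at once; the next wave starts only when this one finishes.
print(dispatch_wave(["T-01", "T-02", "T-03"]))
```

Each wave is exactly one "layer" of the dependency graph: everything whose prerequisites are already done.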

Conclusion: Opening New Possibilities

Managing a change roadmap (Planning) deeply grounded in methodology does not create redundant overhead. These strict planning documents are precisely what makes it safe and effective to direct AI agents.

With the process rigorously logging artifacts like a development diary, the project breaks a conventional barrier: software engineers no longer huddle over manual edits to a single module. Instead, they spin up many parallel sessions, taking charge of tens or hundreds of feature streams simultaneously.

In return, the team can ship dozens of new features overnight, every one of them passing through a serious review mechanism. At the end of the pipeline, the raw speed and volume of AI coding are no longer a concern, because humans retain full control of the source code.