
My AI No Longer Writes Smelly Code


In the previous article, we clarified the true reason projects lose control of their source code: the overwhelming speed of AI code generation pushes reviewers into overload and leads to lax review habits. When combined with the "vibe coding" trend—blindly relying on AI to fix bugs while ignoring context-length limits and side-effect risks—the architectural foundation continuously accumulates technical debt until it completely fractures.

Slowing the AI down or forcing humans to read more slowly would be counterproductive: limiting AI to human capacity means rejecting its true potential.

At Cyberk, the solution to preventing architectural risks lies in the philosophy of making the entire AI workflow transparent by splitting it into two distinct steps:

  • Planning the changes.
  • Continuously reviewing and refining.

We use AI in collaboration with humans to jointly establish and agree upon a highly accurate context. Shifting the focus from Execution to Planning is the key to ensuring that every subsequent line of code written by AI achieves absolute precision.

Solving the Context-Length and Side-Effect Paradox

One of the biggest risks with AI is the context-length limit and its blind spot regarding side-effects. In a complex system, we cannot stuff the entire source code and requirements into a single prompt because it exceeds the model's processing capacity.

Our solution is Context Engineering. Instead of forcing the AI to constantly reload code or guess system designs, we introduce a dedicated discovery phase. In this phase, the AI reads the entire source code once to analyze and synthesize the system's context into a single text file.

When moving to the actual coding phase, developers only need to provide this Context file to the AI. This approach is highly effective for two main reasons:

  • Optimizing reasoning: It forces the AI, before processing logic, to carefully consider core factors such as side-effects (which modules the new feature will impact), security requirements, scalability, and code consistency.
  • Optimizing memory: The output is in pure text format. This is the format AI processes best, maximizing synthesis capability without bloating the context-length.
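The discovery phase above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name, the one-pass summary format, and the choice to keep only top-level definitions are all assumptions, not Cyberk's actual tooling): walk the source tree once and synthesize a plain-text Context file that later prompts can reuse.

```python
from pathlib import Path

def build_context_file(src_dir: str, out_path: str) -> str:
    """Hypothetical discovery pass: read the source tree once and
    distill it into a single plain-text context summary."""
    sections = []
    for path in sorted(Path(src_dir).rglob("*.py")):
        code = path.read_text(encoding="utf-8")
        # Keep only the structural signal: module path plus top-level
        # definitions, so the context stays small and text-only.
        defs = [line.strip() for line in code.splitlines()
                if line.startswith(("def ", "class "))]
        sections.append(f"## {path}\n" + "\n".join(defs))
    context = "# SYSTEM CONTEXT (auto-generated)\n\n" + "\n\n".join(sections)
    Path(out_path).write_text(context, encoding="utf-8")
    return context
```

In the coding phase, the developer hands only this file to the model, instead of re-feeding the whole codebase on every request.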

Making Changes Transparent

To implement Context Engineering practically, review processes are automated via three checkpoints before the system generates any code:

Checkpoint 1: Complexity Triage. New requests are scanned and scored for risk via algorithms. This eliminates subjective human estimation (such as thinking "this feature only takes 15 minutes"). If a request touches the infrastructure layer or core database, the strictest security protocols are automatically triggered.
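A triage pass like this replaces gut estimates with a score. The sketch below is purely illustrative (the layer names, weights, and threshold are invented for the example, not the actual scoring algorithm): a request is scored by the layers it touches, and anything reaching the infrastructure or database layer trips the strict protocol automatically.

```python
# Hypothetical risk weights per architectural layer (illustrative values).
RISK_WEIGHTS = {"ui": 1, "service": 3, "database": 8, "infrastructure": 10}
STRICT_THRESHOLD = 8  # assumed cutoff for the strictest protocols

def triage(touched_layers):
    """Score a change request by the layers it touches, instead of
    trusting a human estimate like 'this only takes 15 minutes'."""
    score = sum(RISK_WEIGHTS.get(layer, 5) for layer in touched_layers)
    strict = any(RISK_WEIGHTS.get(layer, 0) >= STRICT_THRESHOLD
                 for layer in touched_layers)
    return {"score": score, "strict_protocol": strict}
```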

Checkpoint 2: Discovery. The AI automatically scans the codebase for constraints and extracts the current architecture. Developers don't have to waste time re-analyzing legacy modules, and the system is prevented from blindly writing code without understanding the actual structural map.

Checkpoint 3: Proposal. A network of Agents automatically cross-references data, checks Design Patterns, and cross-evaluates each other. The result is a complete implementation Proposal detailing the holistic approach, rather than directly generating code.
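Taken together, the three checkpoints form a pipeline whose output is a plan, not code. The sketch below is a toy model under stated assumptions: the `Proposal` fields, the keyword-based module matching, and the flat +5-per-module scoring are invented for illustration and are not the real agent network.

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    """Hypothetical Proposal object: the artifact the Human Gate
    reviews before any source code is generated."""
    request: str
    risk_score: int
    affected_modules: list = field(default_factory=list)
    design_notes: list = field(default_factory=list)
    approved: bool = False

def run_checkpoints(request: str, architecture: dict) -> Proposal:
    # Checkpoint 1: triage (toy scoring: +5 per core module touched).
    touched = [m for m in architecture if m in request.lower()]
    risk = 5 * len(touched)
    # Checkpoint 2: discovery pulls constraints from the architecture map.
    notes = [f"{m}: {architecture[m]}" for m in touched]
    # Checkpoint 3: the result is a Proposal, never direct code changes.
    return Proposal(request=request, risk_score=risk,
                    affected_modules=touched, design_notes=notes)
```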

This process makes a profound difference through the "Human Gate". Developers are required to evaluate the direction and plan before the system touches the source code. When AI alters features directly, developers lose their holistic oversight; the Proposal, by contrast, lays out a clear, transparent implementation roadmap. If the initial direction is flawed, the source code remains completely isolated and safe. The task then is simply to refine the Context or Proposal without injecting any adverse side-effects into the live project.
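The Human Gate reduces to a simple guard. In this hypothetical snippet (the function name and status labels are illustrative, not real tooling), code generation is only unlocked after an explicit sign-off; a rejected direction loops back to plan refinement and never touches the source tree.

```python
# Hypothetical Human Gate sketch: the codebase is only touched after
# an engineer explicitly approves the Proposal.
def human_gate(proposal, approved_by=None):
    if not approved_by:
        proposal["status"] = "needs-refinement"
        return "refine-proposal"  # source code stays isolated and safe
    proposal["status"] = "approved"
    proposal["approved_by"] = approved_by
    return "generate-code"
```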

Pushing the Review Process Further

Although AI has the capacity to review and propose, the core principle of this mindset is that decision-making and accountability must remain entirely with the human (the engineer).

The engineer must defend their submitted plan against an AI Oracle acting as a challenger. All transcripts and arguments during this process are logged as the project's journal.

Previously, workflows were often restricted to developers reviewing each other's code. Now, the system utilizes AI to review its own plans, allowing models to cross-evaluate one another. Furthermore, developers participate directly in this planning debate until the entire team reaches consensus. This intensified review layer ensures transparency from planning down to the actual source code. Whenever anomalies arise later on, the development team can effortlessly backtrack through the process logs to identify root causes and deploy rapid corrections.
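The project journal described above behaves like an append-only log with search. This is a minimal sketch under assumptions (the class name, entry schema, and keyword search are invented for the example): every challenge and answer in the planning debate is recorded, so an anomaly found later can be traced back to the argument that allowed it.

```python
import time

class ReviewJournal:
    """Hypothetical append-only journal of the planning debate."""
    def __init__(self):
        self.entries = []

    def log(self, author, role, message):
        # Record who said what, in which role, and when.
        self.entries.append({"ts": time.time(), "author": author,
                             "role": role, "message": message})

    def backtrack(self, keyword):
        # Return every past argument mentioning the anomaly keyword.
        return [e for e in self.entries if keyword in e["message"]]
```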

Conclusion

Architecting and standardizing via AI (through Context Engineering) before writing code is not overhead. In reality, it acts as a critical risk-control mechanism.

However, a written Proposal is merely the contextual preparation. In the next article, we will delve into the execution core: "Spec-Driven" execution. We will leverage a TDD (Test-Driven Development) mindset to establish an automated pipeline that compiles the Proposal directly into code, eliminating the need for engineers to debug manually.