
How Structured Documentation Turned My AI Into an Engineering Team

Documentation can be more than a reference—it can be a knowledge layer that both humans and AI agents use to build.


Nino Chavez

Product Architect at commerce.com

“Docs as code” has always been about treating documentation like a first-class citizen—versioning it alongside the software, keeping it close to the codebase, using automation to maintain consistency.

But something shifted when I started using AI-assisted coding. The documentation became more than a reference. It became a knowledge layer for both humans and machines.

The Experiment

When I built our Enhanced Role Management System, the implementation plan laid out a multi-week rollout: database migrations, row-level security (RLS) policies, front-end updates, a full user management interface. Normally, this would take multiple sprints of back-and-forth between developers, testers, and product leads.

Instead, I wrote four documents that function as living specs:

  • role-system-implementation-plan.md — Architecture, database schema, RLS policies, edge functions
  • role-system-developer-guide.md — Type definitions, permission logic, capability mapping
  • role-system-user-guide.md — Plain-language explanation for admins, players, coaches
  • user-management-interface-design.md — UI mockups, workflows, component specs

These aren’t just documentation. They’re promptable context for LLMs.
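
To make that concrete, here is the kind of excerpt the developer guide contains, sketched in TypeScript. The role names, capability keys, and helper function are illustrative stand-ins, not the actual spec:

```typescript
// Illustrative excerpt of what role-system-developer-guide.md describes.
// Role and capability names here are hypothetical examples.

type Role = 'admin' | 'coach' | 'player';

type Capability =
  | 'users.invite'
  | 'users.deactivate'
  | 'roster.edit'
  | 'schedule.view';

// The capability table: a single source of truth that an LLM can
// turn into permission checks, UI guards, and tests.
const CAPABILITIES: Record<Role, readonly Capability[]> = {
  admin: ['users.invite', 'users.deactivate', 'roster.edit', 'schedule.view'],
  coach: ['roster.edit', 'schedule.view'],
  player: ['schedule.view'],
};

export function hasCapability(role: Role, capability: Capability): boolean {
  return CAPABILITIES[role].includes(capability);
}
```

Because the table is structured data rather than prose, a model can consume it the same way a teammate would read a spec.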

The Workflow

I started treating multiple LLMs as a virtual engineering team (a rough sketch of the loop follows the list):

  • Planner LLM: Validates the architecture, identifies gaps, checks for edge cases
  • Builder LLM: Generates SQL migrations, React components, API endpoints based on the docs
  • Tester LLM: Cross-references the developer guide to create automated tests
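
In code, the loop looks roughly like this. `callModel` is a stand-in for whichever provider SDK you use, and everything beyond the spec file names is an illustrative assumption:

```typescript
import { readFile } from 'node:fs/promises';

// Stand-in for a real provider SDK call (OpenAI, Anthropic, etc.).
type ModelCall = (systemPrompt: string, context: string) => Promise<string>;

// Minimal sketch of the planner -> builder -> tester loop.
async function runVirtualTeam(callModel: ModelCall): Promise<string> {
  // The living specs are the shared knowledge base for every role.
  const specs = (
    await Promise.all(
      [
        'role-system-implementation-plan.md',
        'role-system-developer-guide.md',
        'user-management-interface-design.md',
      ].map((path) => readFile(path, 'utf8'))
    )
  ).join('\n\n---\n\n');

  // Planner: validate the architecture and surface gaps first.
  const review = await callModel(
    'You are the planner. Identify gaps and edge cases in these specs.',
    specs
  );

  // Builder: generate migrations, components, and endpoints from the docs.
  const code = await callModel(
    'You are the builder. Generate the SQL migrations and React components these specs describe.',
    `${specs}\n\nPlanner review:\n${review}`
  );

  // Tester: cross-reference the developer guide to produce tests.
  return callModel(
    'You are the tester. Write unit tests for the capability mapping in the developer guide.',
    `${specs}\n\nGenerated code:\n${code}`
  );
}
```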

The docs became the knowledge base: RLS policies generated in minutes instead of days, admin dashboard components scaffolded directly from the interface design mockups, unit tests auto-generated from the capability tables.
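
As an example of what those auto-generated tests look like, here is a minimal sketch against the hypothetical `hasCapability` helper from the earlier excerpt, using Node's built-in test runner. Each assertion mirrors one cell of the capability table:

```typescript
import { test } from 'node:test';
import assert from 'node:assert/strict';
// The hypothetical helper from the developer-guide excerpt above.
import { hasCapability } from './capabilities';

// One assertion per (role, capability) cell, so doc/code drift
// fails loudly instead of silently.
test('admin can invite users', () => {
  assert.equal(hasCapability('admin', 'users.invite'), true);
});

test('coach can edit the roster but not deactivate users', () => {
  assert.equal(hasCapability('coach', 'roster.edit'), true);
  assert.equal(hasCapability('coach', 'users.deactivate'), false);
});

test('player can only view the schedule', () => {
  assert.equal(hasCapability('player', 'schedule.view'), true);
  assert.equal(hasCapability('player', 'roster.edit'), false);
});
```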

A feature scoped at 14 weeks of human development time was bootstrapped in hours, leaving me free to focus on edge cases, business logic, and review.

What I Learned

Traditional documentation is descriptive. It tells you what was built. This approach is prescriptive—it defines what to build next and how AI should build it.

The key shift: instead of handing AI a vague prompt like “build a user management page,” I pass it a design spec. The result is an accurate, production-ready scaffold on the first try. When the prompts are vague, the output is vague. When the specs are structured, the output is structured.

What I’m Still Figuring Out

The docs have to stay in sync with the code. When they drift, the AI starts generating code that doesn’t match reality. I’m experimenting with validation scripts that check doc-to-code alignment on every commit.
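
Here is a minimal sketch of such a check, assuming the guide lists roles as bullets under a `## Roles` heading and that the code's role set is importable (hardcoded here for illustration):

```typescript
import { readFileSync } from 'node:fs';

// Pre-commit sketch: fail when the roles documented in the guide
// drift from the roles the code defines. The doc format and the
// hardcoded role set are assumptions for illustration.
const ROLES_IN_CODE = new Set(['admin', 'coach', 'player']);

const guide = readFileSync('role-system-developer-guide.md', 'utf8');
// Grab only the "## Roles" section, up to the next heading.
const afterHeading = guide.split(/^## Roles\s*$/m)[1] ?? '';
const rolesSection = afterHeading.split(/^## /m)[0];
const ROLES_IN_DOCS = new Set(
  [...rolesSection.matchAll(/^- (\w+)/gm)].map((match) => match[1])
);

const missingFromDocs = [...ROLES_IN_CODE].filter((r) => !ROLES_IN_DOCS.has(r));
const missingFromCode = [...ROLES_IN_DOCS].filter((r) => !ROLES_IN_CODE.has(r));

if (missingFromDocs.length > 0 || missingFromCode.length > 0) {
  console.error('Doc-to-code drift detected:', { missingFromDocs, missingFromCode });
  process.exit(1);
}
```

Wired into a pre-commit hook or CI job, drift fails the build instead of silently feeding the AI stale context.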

The more structured and complete the docs, the more effectively human and AI teams can collaborate. That part seems true. Whether this approach scales beyond solo development—I’m not sure yet. But for now, it’s working.

Originally published on LinkedIn.