Signal Dispatch — March 2026

Anatomy of AI-Native Development

What 223 Sessions and 38,000 Prompts Actually Look Like

Not a demo. Not a tutorial. A data-driven autopsy of one month of real AI-assisted work.

Based on 30 days of Claude Code session data — Feb 19 to Mar 19, 2026

The Raw Numbers

30 Days at a Glance

223
Sessions
38,322
Prompts sent
32
Distinct projects
1.2 GB
Conversation data

Where did the data come from? Claude Code writes .jsonl log files for every session. I wrote a script to parse them — extracting timestamps, turn counts, project directories, and first messages.
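The parsing step can be sketched in a few lines. This is a minimal, hedged sketch: the field names (`type`, `cwd`, `timestamp`, `message.content`) are assumptions about the log schema, not a documented format, and the real script would walk a directory of `.jsonl` files rather than an inline sample.

```python
import json

def summarize_session(jsonl_lines):
    """Summarize one session transcript: prompt count, project dir,
    first user message, and timestamp range.

    NOTE: the field names below ("type", "cwd", "timestamp", "message")
    are assumptions about the log schema, shown for illustration only.
    """
    prompts = 0
    first_message = None
    project = None
    timestamps = []
    for line in jsonl_lines:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        # Remember the most recent project directory seen in the log.
        project = entry.get("cwd", project)
        if entry.get("timestamp"):
            timestamps.append(entry["timestamp"])
        # Count user turns and capture the opening prompt.
        if entry.get("type") == "user":
            prompts += 1
            if first_message is None:
                first_message = entry.get("message", {}).get("content")
    return {
        "prompts": prompts,
        "project": project,
        "first_message": first_message,
        "start": min(timestamps) if timestamps else None,
        "end": max(timestamps) if timestamps else None,
    }

# Tiny inline sample standing in for a real .jsonl transcript.
sample = [
    '{"type": "user", "cwd": "/work/demo", "timestamp": "2026-02-19T09:00:00Z", "message": {"content": "Implement the plan"}}',
    '{"type": "assistant", "timestamp": "2026-02-19T09:00:05Z"}',
    '{"type": "user", "timestamp": "2026-02-19T09:02:00Z", "message": {"content": "Looks good, continue"}}',
]
print(summarize_session(sample))
```

Run per file, the summaries roll up into the session, prompt, and project totals above.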

Session Shape

Not All Sessions Are Created Equal

Type       Prompts    Sessions  Share
Quick      1–10       14        6%
Medium     11–50      47        21%
Deep       51–150     93        42%
Marathon   151–500    54        24%
Ultra      501+       14        6%

The sweet spot is 51–150 prompts. Long enough to build momentum. Short enough to stay focused. But the 30% in Marathon/Ultra is where the biggest artifacts get built — entire features, full client proposals, complete site migrations.
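The bucketing behind the table is straightforward. A minimal sketch, assuming the boundaries shown above (the sample prompt counts are invented for illustration):

```python
from collections import Counter

def bucket(prompts):
    """Map a session's prompt count to the five buckets used above."""
    if prompts <= 10:
        return "Quick"
    if prompts <= 50:
        return "Medium"
    if prompts <= 150:
        return "Deep"
    if prompts <= 500:
        return "Marathon"
    return "Ultra"

# Hypothetical per-session prompt counts, not real data.
sessions = [4, 37, 120, 88, 240, 612]
print(Counter(bucket(p) for p in sessions))
```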

Work Distribution

It’s Not Just Coding

AI Analytics: 30%
Cross-cutting: 21%
Sports Tech: 16%
Enterprise: 9%
Photography: 9%
Client Work: 5%
Blog & Content: 4%
Tooling & Other: 6%

21% of all prompts are “meta-work” — brainstorming, workspace audits, cross-project organization, shared tooling. The connective tissue between projects that doesn’t show up in any sprint board.

Mental Model

How Sessions Get Launched

47x
“Implement the plan”
Thinking is done. Hand off execution.
49x
Resume session
Pick up where the last conversation left off.
15x
Brainstorm
“Consider this idea…” or “What do you think about…”

The dominant mode is directed execution — not exploration. The human does the thinking, the AI does the building. But the AI has enough context to make judgment calls without constant hand-holding.

The Environment

What Makes This Possible

1
CLAUDE.md — Project-Level Instructions
Every project has a persistent config file. Stack, conventions, deployment targets, off-limits files. Claude reads this on session start. No re-explaining.
2
Memory System — Cross-Session Persistence
User preferences, past decisions, feedback corrections. File-based memory that survives across conversations. Claude knows who you are without being told.
3
Skills & Hooks — Automated Behaviors
Edit a TSX file? React checker runs. Start a dev server? Browser verification launches. Write a blog post? Voice guide loads automatically.
4
Project-Scoped Conversations
One conversation per project per work stream. Each session inherits the project’s context automatically. No mixing, no context pollution.
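To make the first pillar concrete, here is an illustrative CLAUDE.md. Every detail below is an invented placeholder, not the author's actual file; the point is the shape: stack, conventions, deployment, and off-limits paths, stated once.

```markdown
# CLAUDE.md  (illustrative example — all details are placeholders)

## Stack
- Next.js (App Router), TypeScript strict mode, Tailwind

## Conventions
- Server components by default; mark client components explicitly
- Conventional commit messages, no emoji

## Deployment
- `main` auto-deploys to Cloudflare Pages

## Off-limits
- Never edit files under `migrations/` or any `.env*` file
```

Because this file is read at session start, none of it has to be restated in prompts.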

Cross-Pollination

The Multi-Tool Workflow

Claude Code isn’t a silo. It’s the execution layer for ideas that originate everywhere.

Inputs
  • Gemini deep research reports
  • Slack conversations with colleagues
  • Screenshots of deployed sites
  • Client feedback emails
  • Competitor site analysis
  • Output from other Claude sessions
Outputs
  • Deployed applications
  • Client proposals and plans
  • Blog posts and presentations
  • Design mockups and prototypes
  • Database migrations
  • Git commits pushed to production

The tool that wins isn’t the one that generates the best ideas. It’s the one that can take input from anywhere and turn it into shipped work.

The Real Projects

What Got Built in 30 Days

QuantifAI
AI analytics platform — enterprise fork + public lite version
47 sessions
Rally HQ
Tournament platform — Swiss format, beta users, UX audit
19 sessions
Photography Gallery
20,000+ photo migration from SmugMug to Cloudflare Images
27 sessions
Client Sites
4 clients — redesigns, MVPs, DNS migrations, proposals
11 sessions
Signal Dispatch
Blog posts, whitepapers, presentations — including this one
17 sessions
Key Insight

The Onboarding Metaphor

I stopped treating AI like a tool and started treating it like a team member who needs onboarding.

Traditional Team
  • Onboarding docs → CLAUDE.md
  • Tribal knowledge → Memory system
  • SOPs → Skills & hooks
  • Stand-ups → Project-scoped sessions
  • Code review → Automated verification
Why This Works
  • Context is persistent, not re-explained
  • Conventions are enforced, not suggested
  • Quality is automated, not manual
  • Delegation is trusted, not micromanaged
  • Output is consistent, not variable

The 223 Sessions Aren’t the Impressive Part

The impressive part is that most of them didn’t require explaining the same thing twice.

Companion blog post: ninochavez.co/blog/what-223-sessions-taught-me-about-working-with-ai

Tutorial: ninochavez.co/blog/setting-up-an-ai-native-dev-environment

Signal Dispatch — ninochavez.co/blog