AI Lunch & Learn - February 2026

Beyond Chat

Building Agentic Workflows with Claude Code

Most AI usage stops at “ask a question, get an answer.” What happens when you treat the AI as a team member with persistent context, codified judgment, and operational autonomy?

A walkthrough of a real production system — not a demo

The Problem

Every Session Starts at Zero

The default AI experience:
1. You open a new session. The AI knows nothing about your project.

2. You spend 10 minutes re-explaining conventions, file locations, and preferences.

3. It generates something generic. You fix it manually. Repeat tomorrow.

The real cost isn’t the AI subscription. It’s the re-onboarding tax you pay every single session — and the drift that happens when context is lost.

The Shift

Chat vs. Agentic: What’s the Difference?

| Dimension | Chat Pattern | Agentic Pattern |
| --- | --- | --- |
| Context | Starts from zero each session | Loads project config automatically |
| Knowledge | Generic best practices | Your conventions, your patterns |
| Workflow | Single Q&A exchange | Multi-step with tools and side effects |
| Quality Control | You review everything manually | Codified standards enforce themselves |
| Trust | Verify every output | Permission envelope grows over time |

The key insight: An agentic workflow isn’t smarter AI. It’s the same AI with better infrastructure around it — memory, judgment, tools, and trust boundaries.

The Architecture

Five Layers of an Agentic System

Each layer solves a specific problem. Together, they compound.

1. Session Bootstrap: project memory that loads automatically

2. Codified Judgment: quality standards the AI enforces

3. Multi-Stage Pipelines: chain models by their strengths

4. Trust Boundaries: permissions that grow incrementally

5. Multiple Pathways: same system, different entry points

None of this requires a custom framework. It’s configuration files, markdown documents, shell scripts, and CI/CD. The tools already exist — the insight is in how you compose them.

Layer 1

Session Bootstrap: CLAUDE.md

A single markdown file that loads at the start of every session. The AI arrives pre-configured.

What goes in it

  • Directory map with purpose annotations

  • Complete workflow templates (step-by-step)

  • Frontmatter schemas and conventions

  • Off-limits files requiring explicit approval

  • Common commands and build steps

Example: Directory Map
project/
├── astro-build/
│   ├── src/content/blog/
│   ├── public/images/
│   └── scripts/
├── docs/          # Voice guide
└── .claude/       # This config

Think of it as onboarding docs for a team member who has perfect recall but joins fresh every morning. Invest 30 minutes once — it pays off in every session.
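A minimal sketch of what such a file might contain. The paths, conventions, and limits below are illustrative, not taken from the actual project:

```markdown
# CLAUDE.md: project memory (illustrative sketch)

## Directory map
- astro-build/src/content/blog/   -> published posts (MDX)
- astro-build/public/images/      -> post illustrations (WebP)
- docs/voice-guide.md             -> editorial standard; read before writing

## Conventions
- Frontmatter requires: title, date, category, tags, image
- Keep paragraphs short; follow the voice guide's formatting rules

## Off-limits without explicit approval
- .env, astro.config.mjs, any destructive git operation

## Common commands
- npm run dev / npm run build / npm run preview
```

The exact sections matter less than the habit: anything you would explain to a new teammate on day one belongs here.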

Layer 1 (cont.)

Slash Commands: Mode-Switching

Custom commands that activate specific operational modes. Same AI, different posture.

/write-post

New Content Mode

Reads the voice guide first, then walks through: identify the hook, choose a structure template, draft following patterns, create frontmatter, self-review against checklist.

/edit-post

Refinement Mode

Reviews against voice authenticity, structural integrity, and tonal balance. Critically: includes explicit “What NOT to Fix” instructions — preserve rough edges, don’t add polish.

/review-voice

Audit Mode

Scores on three dimensions (1-10 each): Voice Authenticity, Structural Patterns, Tonal Consistency. Flags specific red-flag phrases.

/voice-check

Quick Screen

Five yes/no questions. Returns a binary: SOUNDS LIKE NINO: Yes/No/Mostly. Takes 30 seconds. Used for fast iteration.

Each command is a short markdown file — 15-30 lines. The power isn’t in complexity. It’s in giving the AI a clear role to inhabit.
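Concretely, a slash command in Claude Code is a markdown file in `.claude/commands/`; the filename becomes the command name and the body becomes the prompt. A hypothetical sketch of `/voice-check` (the five questions are invented for illustration, not the project's actual checklist):

```markdown
# .claude/commands/voice-check.md (hypothetical sketch)

Quick-screen the current draft. Answer each question yes/no:

1. Would the author plausibly say this sentence out loud?
2. Does the opening earn attention without a retired hook?
3. Are the rough edges preserved, with no over-polished transitions?
4. Is every claim grounded in the author's actual experience?
5. Does it avoid every phrase on the retired list?

Return a single verdict: SOUNDS LIKE NINO: Yes / No / Mostly.
```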

Layer 2

Codified Judgment: The Voice Guide

A 156-post empirical analysis turned into a living editorial standard. Not aspirational — descriptive.

What it captures

  • Voice dimensions: Public practice, meta-awareness, vulnerable competence, pattern recognition
  • Structure templates: Reflection, Technical Deep-Dive, Leadership Insight, Origin Story
  • Anti-patterns: Retired phrases that became “tells” through overuse
  • Formatting rules: Paragraph length, divider usage, component limits

The Cardinal Rule

“This guide describes patterns, not templates. If you copy phrases from this document, you’re using it wrong.”

Every example is evidence, not instruction. The AI should understand the spirit, then find its own words.

The “What NOT to Fix” list is the most valuable part. Any AI can add polish. The hard part is teaching it to leave the rough edges that make writing feel human.

Layer 2 (cont.)

Freshness Tracking: Preventing Staleness

Even authentic phrases become hollow through repetition. The guide tracks recency and enforces cooldowns.

| Pattern / Phrase | Last Used | Cool Until |
| --- | --- | --- |
| “Here’s where my head is” | Jan 21 | Feb 21 |
| Opening with a “How many…” question | Jan 21 | Feb 21 |
| Graveyard/death metaphor | Jan 21 | Mar 21 |

Graduation Rule

Any phrase appearing 3+ times in six months gets permanently retired. No exceptions.

Permanently Retired

“Ask me again in six months” — “Here’s where I’ve landed, for now” — “This is what I think today”

Layer 3

Multi-Stage Pipelines: Image Generation

One model thinks. Another model draws. Anti-cliché guardrails sit between them.

Stage 1: Concept

Gemini Flash

Reads the full post. Generates a specific, non-generic visual concept.

Prompt explicitly rejects: “no robots, lightbulbs, handshakes, puzzle pieces, or generic tech imagery”

Stage 2: Render

GPT-5 Image

Takes the concept, applies category-specific style profile. Generates 1200x675 illustration.

10 visual styles mapped to content categories

Stage 3: Process

Sharp + Frontmatter

Compresses to WebP at 85% quality. Auto-updates the post’s frontmatter with the image path.

No manual steps. Post references the image automatically.

The principle: Chain models by their strengths, not by vendor. One model for reasoning, another for generation, with your quality standards as the glue between them.
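The frontmatter update in Stage 3 can be sketched as a small Node helper. This is a simplified, hypothetical version of that step; the real script also handles the Sharp WebP compression, which is omitted here:

```javascript
// Hypothetical sketch of the Stage 3 frontmatter update:
// insert or replace an `image:` field inside a post's YAML frontmatter.
function setFrontmatterImage(mdx, imagePath) {
  // Capture the frontmatter block between the opening and closing --- fences.
  const match = mdx.match(/^---\n([\s\S]*?)\n---/);
  if (!match) throw new Error("No frontmatter block found");
  let fm = match[1];
  if (/^image:/m.test(fm)) {
    // Replace an existing image field in place.
    fm = fm.replace(/^image:.*$/m, `image: ${imagePath}`);
  } else {
    // Append the field to the end of the frontmatter.
    fm = `${fm}\nimage: ${imagePath}`;
  }
  return mdx.replace(match[0], `---\n${fm}\n---`);
}
```

Because the image path is written into the post itself, the build picks it up on the next deploy with no manual edit.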

Layer 3 (cont.)

Category-Specific Visual Identity

Each content category has a distinct illustration style. The AI doesn’t choose — the system assigns.

AI & Automation

Electric cyan circuit patterns, flowing data streams

Reflections

Golden amber, journal marginalia aesthetic

Leadership

Architectural linework — lighthouses, bridges

Systems Thinking

Organic network diagrams, mycelium patterns

Meta

Escher-like impossible geometries

Consulting Practice

Whiteboard energy, sketch-in-progress

Visual consistency at scale. Every post gets a unique illustration, but they all feel like they belong to the same publication. The AI handles uniqueness; the system handles coherence.

Layer 4

Trust Boundaries: The Permission Envelope

Autonomy isn’t granted wholesale. It’s earned incrementally through accumulated permissions.

Approved (170+ permissions)

  • Git operations (add, commit, push, branch)

  • npm build pipeline (dev, build, preview)

  • Node automation scripts

  • Browser automation (Playwright, DevTools)

  • WebFetch for curated domains only

Off-Limits (Explicit Approval)

  • .env files (secrets, API keys)

  • Build configuration (astro.config.mjs)

  • Unapproved external domains

  • Destructive git operations

The permission list is a living document that grows through use. Each approval says: “I trust you to do this without asking.” Over time, the agent operates faster because the trust is pre-established.
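In Claude Code specifically, this envelope lives in `.claude/settings.json` under a `permissions` key with `allow` and `deny` rule lists. A condensed sketch, with illustrative entries rather than the project's actual 170+:

```json
{
  "permissions": {
    "allow": [
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "Bash(git push:*)",
      "Bash(npm run build)",
      "WebFetch(domain:docs.astro.build)"
    ],
    "deny": [
      "Read(.env)",
      "Edit(astro.config.mjs)",
      "Bash(git push --force:*)"
    ]
  }
}
```

Approving a prompt with "always allow" appends a rule here, which is exactly how the list grows through use.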

Layer 5

Multiple Pathways: Same System, Four Entry Points

The same content system, the same quality standards — accessible from anywhere.

Pathway 1: Local (Primary)

Claude Code Session

CLAUDE.md loads → slash commands activate modes → voice guide enforces quality → scripts generate images → git push deploys to Vercel

Pathway 2: Mobile

GitHub Issue Form

Structured issue template → GitHub Actions triggers → Claude API formats MDX → auto-commits → Mailgun cross-posts to Substack

Pathway 3: Web

Draft Preview UI

View draft on site → click “Generate Image” → API fires repository dispatch → CI generates image → Vercel redeploys

Pathway 4: Browser

Visual Audit

Playwright/Chrome DevTools MCP → inspect live site at breakpoints → verify rendered output → audit visual design

The voice guide is the connective tissue. It doesn’t matter which pathway triggers content creation — the same quality standards apply regardless of entry point.

The GitHub Issues Pipeline

Publish From Your Phone

A structured issue form feeds directly into a CI/CD pipeline. No local dev environment needed.

Issue Form Fields
title: Post title
category: Dropdown selection
tags: From canonical list
featured: Boolean toggle
source: Where content came from
content: Raw text or markdown
What Happens Next
1. GitHub Actions triggers on the issue label
2. Claude Sonnet API formats the content as MDX
3. The MDX file is committed to the repo with frontmatter
4. Mailgun emails the content to Substack
5. The issue auto-closes with a confirmation
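A condensed sketch of what the triggering workflow might look like. The label name, script path, and secret names are illustrative assumptions:

```yaml
# .github/workflows/publish-from-issue.yml (illustrative sketch)
name: Publish from issue
on:
  issues:
    types: [labeled]
jobs:
  publish:
    if: github.event.label.name == 'publish'   # assumed label name
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Format issue body as MDX via the Claude API
        run: node scripts/issue-to-mdx.js      # hypothetical script
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          ISSUE_BODY: ${{ github.event.issue.body }}
      - name: Commit the new post
        run: |
          git config user.name "content-bot"
          git add astro-build/src/content/blog/
          git commit -m "Add post from issue #${{ github.event.issue.number }}"
          git push
```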

The insight: Content from any source — a voice note transcribed by Gemini, a draft from your phone’s notes app, an idea at 2am — can enter the same quality pipeline.

The Content Graph

Six Content Types, One Provenance System

Every piece of content can trace its lineage. Research notes link to posts. Posts link to whitepapers. Counterpoints challenge posts.

Blog Posts

Conversational exploration. MDX components. Voice-guide enforced.

Whitepapers

Formal analysis. Data tables. Plain markdown only.

Presentations

Slide-based delivery. Exportable as standalone HTML.

Counterpoints

Adversarial critiques of existing posts. AI-generated or self-authored.

Research Notes

Working documents. Red-team analyses, lit reviews, methodology.

Series

Groups related posts into multi-part arcs.

Bidirectional linking: Posts declare supportedBy (which research backs this?). Research notes declare supportsContent (which posts use this?). The graph is navigable in both directions.
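In frontmatter terms, one side of such a link pair might look like this (titles and slugs are illustrative):

```yaml
# Blog post frontmatter
title: "Why Multi-Model Pipelines Break"
supportedBy:
  - research/multi-model-failure-modes     # hypothetical research-note slug
---
# Matching research note frontmatter
title: "Multi-Model Failure Modes: Red-Team Notes"
supportsContent:
  - blog/why-multi-model-pipelines-break   # hypothetical post slug
```

Keeping both directions explicit means the graph can be rendered from either end without a join step.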

Lessons Learned

What Actually Worked

High ROI

  • CLAUDE.md — 30 minutes to write, saves 10 min per session forever
  • “What NOT to Fix” — The single most impactful editorial instruction
  • Slash commands — 15 lines of markdown creates a mode the AI inhabits
  • Freshness tracking — Prevents the staleness that makes AI writing feel templated

Honest Surprises

  • Permissions accumulate fast — 170+ entries. Needs periodic review.
  • Voice guide maintenance — It’s a living document, not write-once. Requires updating after every ~5 posts.
  • Multi-model pipelines are fragile — When one API changes, the whole chain breaks.
  • The counterpoints system — AI critiquing its own output is genuinely useful, not a gimmick.

Takeaways

What You Can Do Monday Morning

You don’t need the full system. Start with Layer 1 and build up.

1. Write a CLAUDE.md (or equivalent) for your project

   Directory map, conventions, workflow steps. Takes 30 minutes. Pays off immediately.

2. Create one slash command for your most repetitive task

   Code review, PR description, ticket grooming — whatever you do weekly. 15 lines of markdown.

3. Document what the AI should NOT do

   The most valuable instruction isn’t “do this.” It’s “don’t touch that.” Preserve what’s already working.

4. Let the permission list grow organically

   Don’t pre-approve everything. Let the AI ask, approve what makes sense, and the trust boundary forms naturally.

The Bigger Picture

The gap between “using AI” and “working with AI” is infrastructure, not intelligence. The models are already capable. The question is whether your systems let them remember, enforce standards, use tools, and operate within trust boundaries.

  • 5 config files that form the system

  • 0 custom frameworks required

  • 156 posts through this system

Signal Dispatch | February 2026