The Cognitive Foundry, Part 2: What the Simulation Can't Teach
Consulting Practice · 10 min read

I wrote about simulation replacing apprenticeship. Then I stress-tested the idea. The technical case still holds—but I was wrong about what matters most.

Nino Chavez

Product Architect at commerce.com

I published “The Cognitive Foundry” earlier today. The thesis: AI has severed the link between labor and learning. The grind that used to train partners is disappearing. Firms are building “corporate flight simulators” to replace it.

Then I did something I should have done first.

I Red Teamed my own argument.


The Adversarial Stance

Red Teaming isn’t just playing devil’s advocate. It’s assuming the stance of every skeptic who has reason to want you to fail—and testing whether your thesis survives them.

The client who refuses to pay for training. The partner who distrusts the simulation. The market force that punishes “paper experts” when they collapse in a crisis.

I ran the Cognitive Foundry through four vectors:

  • Pedagogical validity: Can simulations actually replace reality?
  • Economic feasibility: Who funds the non-billable hours?
  • Tacit knowledge transfer: Can “gut feel” be digitized?
  • Structural integrity: Does the organization survive this transition?

Here’s what held up. And what didn’t.


What Still Holds: The Technical Case

The technical case for simulation is strong. Possibly stronger than I originally argued.

Compression works for explicit knowledge. A junior can run through five years of rare crisis events—a data breach, a hostile CEO, a failed product launch—in a one-month boot camp. Pattern recognition accelerates. Financial modeling fluency accelerates. Problem decomposition accelerates.

Safety enables iteration. When failure doesn’t cost the firm a client relationship, people take risks they wouldn’t otherwise take. They try the unconventional approach. They learn faster from mistakes.

Measurement becomes possible. The AI Mentor can track interruption frequency, speaking pace, language mirroring, empathy markers. Things that used to be “soft feedback” become data.

None of this has changed. The Foundry is real. It’s being built.


What Breaks: The Determinism Fallacy

Here’s where I got it wrong.

I used the “flight simulator” analogy. Pilots learn in simulators; consultants can learn in them too. The problem is that aviation is a deterministic system. If the pilot executes the correct inputs, the plane recovers from the stall. Physics doesn’t have bad days.

Business is stochastic. It’s driven by human psychology, politics, and irrationality. A logically perfect argument gets rejected because of:

  • Internal office politics the consultant never saw
  • The CFO’s ego protecting a previous decision
  • A bad mood from something that happened at home
  • Sunk cost fallacy dressed up as “strategic alignment”

Current synthetic users—the AI personas juniors practice on—can mimic conversation. They struggle with irrational obstinacy. They regress toward the mean of their training data: too rational, too polite, too willing to agree with good logic.

The real CFO doesn’t agree with good logic. The real CFO sometimes says “no” for reasons they can’t articulate and wouldn’t admit if they could.

The simulation can teach you to build the perfect argument. It can’t teach you what to do when the perfect argument loses.


The Paper Pilot Problem

There’s a phenomenon in aviation called “automation dependency.” Pilots who rely too heavily on autopilot lose the “stick and rudder” skills needed to handle a crisis. When the autopilot fails, they freeze.

Consulting is creating its own version.

A junior who uses AI to generate a market sizing estimate gets the right answer without understanding the mechanics of the market. They present the number as fact, unaware of its fragility. They don’t know that the Q3 revenue figure was estimated from a footnote in a PDF, or that the competitor’s headcount came from a LinkedIn scrape that’s probably 40% wrong.

I called this “Surface Competence” in the original piece. The Red Team sharpened it: the simulation removes fear.

When a junior manually grinds through the data, they feel the friction. They notice the gaps. They develop a kind of somatic skepticism—their gut learns to twitch when something feels off.

In the simulation, there’s no true consequence for failure. No cortisol spike. No career risk. No partner looking over your shoulder asking why the numbers don’t reconcile. That fear is unpleasant—but it’s also the most potent encoding mechanism for judgment.


The Economic Hole I Missed

The original piece acknowledged the shift from billable apprenticeship to training-as-expense. But I didn’t fully confront the math.

Under the Pyramid model, training was a by-product of revenue. The junior learned while billing. The client unknowingly subsidized education through the “inefficiency” of junior labor.

In the Foundry model, the junior learns instead of billing. Every simulator hour is an hour they’re not generating revenue.

The traditional ratio was 1 partner to 6-8 juniors, with juniors billing at 80-90% utilization. The emerging Diamond model is 1 partner to 2-3 juniors (plus AI), with juniors at maybe 40% billable utilization. The rest is training.

That’s not a marginal shift. It’s a structural transformation of the business model.
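The ratios above can be put into a rough back-of-envelope sketch. The leverage and utilization figures come from the paragraph; the annual hours and the billing rate are illustrative assumptions, not real firm numbers.

```python
# Back-of-envelope: junior billable capacity per partner, Pyramid vs Diamond.
# Ratios and utilization are from the post; hours and rate are assumptions.

BILLABLE_HOURS_PER_YEAR = 2000  # assumed full-time billable capacity
ASSUMED_RATE = 200              # assumed junior hourly rate, USD

def junior_billings_per_partner(juniors: float, utilization: float) -> float:
    """Annual junior revenue generated under one partner."""
    return juniors * utilization * BILLABLE_HOURS_PER_YEAR * ASSUMED_RATE

# Pyramid: ~7 juniors per partner at ~85% utilization
pyramid = junior_billings_per_partner(juniors=7, utilization=0.85)
# Diamond: ~2.5 juniors per partner at ~40% utilization
diamond = junior_billings_per_partner(juniors=2.5, utilization=0.40)

print(f"Pyramid: ${pyramid:,.0f}")              # ≈ $2,380,000
print(f"Diamond: ${diamond:,.0f}")              # ≈ $400,000
print(f"Capacity lost: {1 - diamond/pyramid:.0%}")  # ≈ 83%
```

Even with generous assumptions, junior billings per partner fall by roughly four-fifths. That gap is the hole the rest of this section is about.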

Who pays for this?

Firms can’t pass it to clients—clients are already refusing to pay for junior hours now that AI handles the grunt work. Firms can absorb it—which compresses margins and eventually compensation. Or firms can push it onto the employee.

The “tuition model” is coming. Future consultants may pay for their Foundry certification before being hired. Or accept dramatically lower “resident” salaries during the synthetic apprenticeship years, the way medical residents trade compensation for training.

I don’t think most firms—or most junior candidates—have internalized this yet.


The Hallway Problem

Here’s the hardest thing to accept.

In the traditional model, tacit knowledge transferred through presence. The junior sitting in the back of the boardroom didn’t learn by speaking—they learned by watching. The partner’s body language. The timing of their silence. The micro-adjustment when the client’s tone shifted.

They learned in the taxi ride to the airport. The late-night pizza dinner after the deal fell through. The unguarded comment in the elevator.

None of this transfers through simulation.

An AI Mentor can critique a slide’s logic. It cannot critique the tone of the presentation. It cannot teach a junior that the client’s “Yes” actually meant “No” based on the tension in the room—a tension that only someone physically present could feel.

The Foundry, by design, digitizes and isolates the learning experience. It sterilizes it.


The Amended Position

So where does this leave the Cognitive Foundry?

The Red Team verdict: necessary but insufficient.

Necessary because the economics of the old model are broken. Firms can no longer bill for learning. The Pyramid is collapsing whether we like it or not.

Insufficient because the simulation can only do half the job. It accelerates technical competence. It does not build professional wisdom.

Three things have to change.


Recommendation 1: The Shadow Subsidy

Firms must reinvest the margin gains from AI efficiency into a new form of apprenticeship—but it’s not the old grind. It’s shadow time.

For every AI-augmented project, assign a Shadow Junior. Their role is not to produce slides—the AI does that. Their role is to sit in the room. Take notes on social dynamics. Debrief with the partner afterward.

“What did you notice when the CFO’s voice changed?”

“Why do you think the CEO didn’t push back on the timeline?”

“When I paused before answering that question, what signal do you think I was reading?”

This Shadow Time is non-billable. It has to be treated as a valid investment, subsidized by the higher fees the firm commands for AI-driven speed. The efficiency gains from AI create the margin for human mentorship.

The simulation teaches procedure. The shadow teaches craft.


Recommendation 2: Chaos Engineering for Talent

The Foundry simulations must not be safe.

Right now, most training simulations are designed to be winnable. They have correct answers. They reward best practices. They build confidence through success.

That’s wrong.

The simulations need Chaos Engineering—a term from software reliability, where you intentionally inject failures to build system resilience.

For talent development, this means:

  • Unwinnable scenarios where no answer is correct
  • Irrational clients who reject good logic for reasons they won’t explain
  • Data betrayal where the AI gives the wrong answer and the junior must catch it
  • Emotional volatility where the synthetic client’s mood shifts unpredictably

The goal isn’t to teach the right answer. The goal is to train emotional resilience. The ability to stay composed when the model fails. The skepticism to question outputs. The judgment to recognize when you’re in over your head.

The simulation must hurt. Otherwise it teaches the wrong lesson: that good process produces good outcomes. In consulting, that’s not always true.


Recommendation 3: The Cognitive Architect Track

Here’s the role I underemphasized.

The future isn’t just “Engagement Architects” who orchestrate human-AI systems. It’s a formally recognized Cognitive Architect career path—distinct from the traditional Analyst-to-Partner ladder.

This role specializes in:

  • Designing AI workflows for specific client problems
  • Auditing synthetic outputs for hallucinations and errors
  • Bridging the gap between technical teams and business stakeholders
  • Building the simulation scenarios themselves

This isn’t back-office support. This is a prestige track. A Cognitive Architect who builds scalable training assets—who creates the simulations that develop the next generation—should be compensated like a partner.

The firms that get this right will attract talent that currently flows to product management at tech companies. The ones that treat it as operations will lose.


Where I Land Now

The ladder is broken. The rocket ship is being built. Neither of those statements has changed.

But I was too optimistic about what the rocket ship could carry.

The Cognitive Foundry accelerates technical development. It does not replace human development. Firms that treat simulation as a full substitute for apprenticeship will produce “Paper Pilots”—technically proficient consultants who freeze when the situation exceeds their training.

The firms that figure out the hybrid—simulation for skills, shadow time for wisdom, chaos engineering for resilience—will build something new. Something faster and more humane than the old Pyramid, but also more robust than pure synthetic training.

I’m not sure anyone has cracked it yet. But the shape is becoming clearer.

The experiment is still running. I’m watching closer now.

Sources & Supporting Material

This post evolved from a formal Red Team analysis of my original Cognitive Foundry thesis.

  1. Red Team Analysis: The Cognitive Foundry and the Crisis of Competence (Primary source)

     Full adversarial audit examining pedagogical validity, economic feasibility, tacit knowledge transfer, and structural integrity.
