The AI Analyst
We're not short on people who can chat with a bot. We're starved for people who can deconstruct a business process into atomic units an AI can actually execute. That gap has a name now.
Nino Chavez
Product Architect at commerce.com
I’ve been circling this idea for months now, and I finally found the words for it.
We’re drowning in people who can use AI. Chat with it. Paste prompts. Generate outputs.
We’re starved for people who can integrate it.
The Integration Gap
Here’s what I keep seeing in organizations: massive capital expenditure on AI tools, followed by months of… nothing much changing.
The failure point is rarely the technology itself.
It’s the lack of someone who can deconstruct a complex business process into atomic units suitable for AI execution. Someone who can translate vague business requirements—“make this faster,” “automate that workflow”—into deterministic technical specifications.
The market is flooded with employees who can chat with a bot, but starved for professionals who can design human-AI systems.
I started calling this person the “AI Analyst.”
Not because I invented the concept. Because the pattern has precedent.
The Bridge Role Pattern
Every major technology shift creates the same gap—and the same bridge role to fill it.
During the electrification era, factories needed someone between the electricians who installed the infrastructure and the managers who ran operations. The “Plant Engineer” emerged—someone who designed how power flowed through the business.
During the computing revolution, the “Systems Analyst” became the critical bridge. They translated business needs into technical specifications without writing the code themselves. They understood both worlds well enough to design what should be built.
The internet era had the “Digital Strategist.” The cloud era had the “Solutions Architect.”
Each of these roles shared a critical characteristic: they emphasized design over operation.
This pattern matters because it tells us something about skill durability.
Prompt Engineer Is the New Webmaster
Here’s the uncomfortable parallel I keep coming back to.
Remember “Webmaster”? In 1998, it was a prestigious title. By 2005, it was embarrassing. The role didn’t disappear—it split into specialized functions (developer, designer, SEO specialist) and the generalist title became a joke.
Prompt Engineering is following the same arc.
In 2023, “Prompt Engineer” sounded cutting-edge. By 2026, the models are smart enough that “magic words” matter less than structured context and data. The skill isn’t dying—it’s dissolving into something broader.
The people who built careers on “10 prompts that will blow your mind” are scrambling. The people who understood systems—how to decompose processes, when to use which model, how to govern the outputs—are getting promoted.
What an AI Analyst Actually Does
So what does the bridge role look like for AI?
Traditional software engineers focus on building products. AI Analysts focus on assembling existing products and platforms into new experiences and operational efficiencies.
A software engineer writes code. An AI Analyst might never touch code—but they understand how to:
- Decompose a workflow into atomic tasks
- Calculate when AI makes economic sense (and when it doesn’t)
- Design handoff points between humans and agents
- Govern the systems they build
That last point is the one most people miss.
Notice what these skills have in common: they’re all design skills, not operation skills.
You’re not running the automation. You’re architecting how it fits the business. You’re not prompting the model. You’re designing the system that uses it.
That’s the difference between a role that lasts and one that commoditizes.
The Economics of Intelligence
One thing that separates an AI Analyst from a “power user” is understanding the P&L implications of their choices.
We’re in a weird moment where intelligence costs are fracturing. Reasoning-heavy models like OpenAI’s o1 command premium pricing: $60 per million output tokens. Open-weight challengers like DeepSeek R1 are driving costs toward zero: $2.19 per million output tokens.
That’s a 27x price difference.
A skilled analyst knows when to deploy the “smart/slow” model versus the “dumb/fast” model. They’re not just building automations—they’re managing a resource.
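To make that concrete, here’s a back-of-the-envelope sketch. The prices mirror the rates above (treated as output-token rates); the routing rule and task volumes are hypothetical:

```python
# Back-of-the-envelope model economics. Prices are illustrative
# output-token rates in USD per million tokens; real pricing varies
# by provider, tier, and input/output split.

PRICE_PER_M_TOKENS = {
    "premium": 60.00,  # frontier reasoning model
    "budget": 2.19,    # open-weight challenger
}

def task_cost(model: str, output_tokens: int) -> float:
    """USD cost of one task's expected output."""
    return PRICE_PER_M_TOKENS[model] * output_tokens / 1_000_000

def route(needs_deep_reasoning: bool) -> str:
    """Hypothetical routing rule: pay the 27x premium only when
    the task genuinely needs the stronger model."""
    return "premium" if needs_deep_reasoning else "budget"

# 10,000 tasks a month at roughly 2,000 output tokens each:
tasks, tokens = 10_000, 2_000
for model in ("premium", "budget"):
    print(f"all {model}: ${tasks * task_cost(model, tokens):,.2f}/month")
```

Same workload, $1,200 a month versus about $44. That’s the line item an analyst is expected to defend.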
Most AI training courses don’t touch this. They teach prompting. They don’t teach economics.
This is exactly the pattern from previous eras. The Plant Engineer understood electrical costs. The Systems Analyst understood compute costs. The Solutions Architect understood cloud spend.
The bridge role always includes resource economics. That’s part of why it’s durable.
Design Over Operation
I keep coming back to this distinction because it’s the key to understanding which skills compound.
The Systems Analyst role lasted for decades because it was about designing how computing fit the business—not about operating the computers. The computer operators? That role commoditized and largely disappeared.
The Solutions Architect role is still going strong because it’s about designing cloud infrastructure—not about managing servers. The server admins? Largely automated away.
The AI Analyst role—if it follows the pattern—will last because it’s about designing human-AI systems. The prompt engineers? Already commoditizing.
From Workflow to Agent Orchestration
Here’s where I’m revising my own thinking.
The language of “workflow engineering” is already getting stale. We’re not building linear automations anymore—trigger, action, done. We’re orchestrating agents.
The distinction is nuanced but vital: Automation follows a pre-defined path. Agents determine their own path to achieve a goal.
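A minimal sketch of the difference, assuming a hypothetical `call_llm` stand-in for whatever model API you’d actually use:

```python
# Automation vs. agent, side by side. Everything here is a stand-in.

def automation(invoice: dict) -> str:
    """Pre-defined path: the same three steps, every run."""
    data = {"vendor": invoice["vendor"], "total": invoice["total"]}  # extract
    record_id = f"ERP-{hash(data['vendor']) % 1000}"                 # write
    return f"confirmation sent for {record_id}"                      # notify

def call_llm(history: list[str], tools: dict) -> str:
    """Stand-in for a real model call: a real agent would send the
    history and tool schemas to an LLM and get the next action back."""
    return "done" if any("->" in h for h in history) else "lookup"

def agent(goal: str, tools: dict, max_turns: int = 10) -> list[str]:
    """Goal-seeking loop: the model chooses the next tool each turn."""
    history = [f"goal: {goal}"]
    for _ in range(max_turns):
        action = call_llm(history, tools)
        if action == "done":
            return history
        history.append(f"{action} -> {tools[action](goal)}")
    raise RuntimeError("turn budget exhausted; escalate to a human")

tools = {"lookup": lambda goal: f"found records for '{goal}'"}
print(automation({"vendor": "Acme", "total": 913.50}))
print(agent("reconcile vendor invoices", tools))
```

The automation runs the same three steps forever. The agent gets a goal and a toolbox, and the loop itself becomes the design surface.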
Agentic AI is projected to permeate 40% of enterprise workflows by the end of 2025. Major consultancies report 88% of senior executives are increasing budgets specifically for agentic capabilities.
The AI Analyst of 2024 built Zapier workflows.
The AI Analyst of 2026 manages a “swarm”—multiple agents that pass tasks to each other, verify each other’s work, and escalate to humans only when necessary.
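In sketch form, with stand-ins for the worker and verifier agents:

```python
# Worker/verifier handoff with human escalation.
# Both "agents" are stand-ins for real model-backed calls.

def run_worker(task: str) -> str:
    return f"draft answer for: {task}"        # would be an agent call

def run_verifier(task: str, draft: str) -> bool:
    return task in draft                      # would be a second agent call

def swarm(task: str, max_retries: int = 2) -> str:
    for _ in range(max_retries + 1):
        draft = run_worker(task)
        if run_verifier(task, draft):         # agents verify each other's work
            return draft
    return escalate(task)                     # humans only when necessary

def escalate(task: str) -> str:
    return f"queued for human review: {task}"

print(swarm("summarize Q3 churn drivers"))
```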
Same core skill—process decomposition and system design. Different surface area.
The tools change. The design thinking persists.
The Data Hygiene Problem
Here’s something I underestimated when I first sketched this curriculum: the difficulty of data cleaning.
Real-world AI projects don’t fail because the model is dumb. They fail because the data is messy.
Inconsistent formats. Missing fields. PDFs that were never designed to be parsed. Spreadsheets that encode tribal knowledge in cell colors.
An AI Analyst needs to be able to design a pipeline that uses an LLM to extract structured data, validate that data against a schema, and flag errors for human review.
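Here’s a hedged sketch of that shape. The `extract_with_llm` call is a stand-in for a structured-output model request, and the schema is a plain dict; in practice you’d likely reach for a validation library:

```python
# Sketch of an extract -> validate -> flag pipeline.
# extract_with_llm is a stand-in for a structured-output model call.

REQUIRED = {"invoice_number": str, "vendor": str, "total": float}

def extract_with_llm(raw_text: str) -> dict:
    """Stand-in: a real version would prompt a model for JSON output."""
    return {"invoice_number": "INV-0042", "vendor": "Acme", "total": "913.50"}

def validate(record: dict) -> list[str]:
    """Check the record against the schema; return human-readable errors."""
    errors = []
    for field, ftype in REQUIRED.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(
                f"{field}: expected {ftype.__name__}, "
                f"got {type(record[field]).__name__}"
            )
    return errors

def pipeline(raw_text: str) -> dict:
    record = extract_with_llm(raw_text)
    errors = validate(record)
    status = "needs_human_review" if errors else "ok"
    return {"status": status, "errors": errors, "record": record}

print(pipeline("...messy PDF text..."))
# total came back as a string, so the record is flagged, not silently written
```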
This isn’t glamorous work. It’s also where most projects die.
If you can’t clean the data, you can’t automate the process. Full stop.
The Governance Opportunity
The other piece I’m doubling down on: governance.
Shadow AI is exploding. Employees are spinning up automations without IT approval, feeding proprietary data into consumer tools, building critical workflows on free-tier accounts that could disappear tomorrow.
Organizations are desperate for someone who can:
- Create an AI Acceptable Use Policy
- Build a risk assessment matrix
- Design audit trails for automated decisions (sketched below)
- Define escalation paths when agents fail
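The audit trail is the most concrete of those to sketch: one record per automated decision, written before the action executes. The field names and the 0.8 threshold here are illustrative:

```python
# Sketch of one audit record for an automated decision.
# Field names and the confidence threshold are illustrative policy choices.

import json
import time
import uuid

def audit_record(agent_id: str, action: str, inputs: dict,
                 decision: str, confidence: float) -> dict:
    return {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent": agent_id,
        "action": action,
        "inputs": inputs,               # what the agent saw
        "decision": decision,           # what it decided
        "confidence": confidence,       # its self-reported certainty
        "escalated": confidence < 0.8,  # policy: low confidence goes to a human
    }

record = audit_record("invoice-bot-v2", "approve_payment",
                      {"invoice": "INV-0042", "total": 913.50},
                      "approve", confidence=0.62)
print(json.dumps(record, indent=2))     # "escalated": true -> review queue
```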
This isn’t sexy. It’s also the fastest path to a seat at the leadership table.
An AI Analyst who can produce a professional-grade governance framework isn’t just a tinkerer. They’re a “safe pair of hands.” That distinction matters when budgets tighten and executives get nervous about liability.
And notice: governance is a design skill. You’re designing the rules, the boundaries, the escalation paths. Not operating within them.
What This Means for Training
The training landscape right now is bifurcated.
On one end: cheap video courses that teach prompting. High volume, low transformation. Completion rates around 10-15%. They’re training operators, not designers.
On the other end: expensive bootcamps that promise career switching. $10,000+ price tags. Slow curriculum updates. By the time you graduate, the tools have changed.
The middle ground—rigorous training that teaches design thinking at an affordable price—is mostly empty.
That’s the gap I’m trying to fill with the AI Analyst Academy.
Not “learn to chat with AI.”
Not “become a machine learning engineer.”
Something in between: become the person who can design human-AI systems without writing code, but with enough technical depth to understand what’s actually happening under the hood.
The Uncomfortable Truth
Here’s what I keep landing on:
The people who are “waiting for AI to mature” before really engaging with it are making a bet that the learning curve will flatten. That they’ll be able to catch up later when it’s more stable, more predictable, more like traditional software.
I don’t think that bet pays off.
The gap between tourists and practitioners is widening precisely because the practitioners are building design intuition that compounds. Every system I build teaches me something I couldn’t learn from reading about AI. Every failure reveals constraints I wouldn’t have anticipated.
You can’t memo your way to fluency. You can’t delegate your way to understanding. You have to design something. Watch it break. Redesign it.
That’s how the Systems Analysts learned. That’s how the Solutions Architects learned. That’s how AI Analysts will learn.
Why the Name Matters
I spent a while wrestling with what to call this role.
“Business Engineer” felt too corporate, too close to software engineering. “AI Operator” sounded like someone who runs things, not designs them. “Prompt Strategist” was already dated.
“AI Analyst” stuck because it has 50 years of precedent.
The Systems Analyst bridged business and technology during the computing revolution. The role lasted because it emphasized design—translating business needs into technical specifications—rather than operation.
The AI Analyst is the same pattern, new era.
Someone has to sit between the AI tools and the business processes. Someone has to translate between what executives want and what agents can actually do. Someone has to design the governance layer so the whole thing doesn’t collapse under its own complexity.
That someone is increasingly in demand.
And the supply is thin.