The AI Analyst
We're not short on people who can chat with a bot. We're starved for people who can deconstruct a business process into atomic units an AI can actually execute. That gap has a name now.
I don't need more Salesmen. I need a Sushi Master, a Pitmaster, and a Molecular Gastronomist. AI lets us return to the Guild—craftspeople in their own lanes, augmenting their own mastery.
What if the billable hour isn't a business model—it's a coping mechanism? A way to avoid confronting that the thing we're selling might not be scarce anymore.
I just finished writing a whitepaper on agentic commerce. It's solid work. But something started bothering me. There's a version of this future that looks less like 'shopping gets easier' and more like 'retail becomes a trading floor.'
Agentic shopping doesn't change why we buy. It radically changes the cost of discovery and the precision of the pitch. The game is the same. The speed of the players has changed everything.
Hank Green says the AI industry is a bubble. I think we're looking at it wrong—what if AI isn't a product at all, but a foundational technology like electricity?
If AI does all the junior work, where do the senior engineers come from? I used to see this as a pipeline problem. Now I'm wondering if we're not even using a ladder anymore.
I'm not just paying for the service anymore. I'm doing the work. And somewhere between the self-checkout and the self-install kit, I stopped noticing when 'convenience' became unpaid labor.
Agentic software is powerful, but it needs guardrails. I'm finding the most important work isn't coding, but architecting the systems that constrain the code.
I used to wear 'GSD' like a badge. But somewhere along the way, I realized my old mantra was creating the exact dependency I was supposed to prevent.
The market is flooded with slideware about agentic AI. But the people presenting those decks aren't speaking from experience—they're parrots using old tools to describe a new world.
The constant role-switching isn't a strength—it's a source of profound inefficiency. AI changes the model.
The gap between slick slide decks and the messy reality of building something that actually works is wider than most people realize.
The shift from prompting to architecting—and why more powerful models demand a different kind of input.
The shift isn't about learning to use AI—it's about learning to build for AI. That distinction changes everything.
Clean specs, small tasks, a unified preamble, and ruthless guardrails. The model mattered less than the system.
The easy hardware wins are behind us. Progress is shifting into new territory.
Everyone is busy selling shovels. But the mine moved. Agents now sell on surfaces brands do not own. The win is not a better shovel. It is eyes and throttle on those agents: traces, attribution, consent, limits.
The shift from search-and-scroll to conversational answers isn't theoretical. It's already redistributing traffic, revenue, and power.
You can rush with straw or sticks, but the only way to survive the wolf of drift and fragility is to build with bricks.
We're in an AI development gold rush. The same historical pattern applies: unregulated innovation, followed by standards that make outputs safe and repeatable.
I've been shipping features so fast that during a call with a friend, they asked how to use the framework—and I froze. That's observability debt.
We claimed features that didn't exist. Twice. So we blocked our own framework until we could prove we weren't full of it.
Something broke, and instead of debugging the code, I asked the system why it hadn't caught the problem itself.
Discovery has decentralized. By the time customers reach your product page, their minds are made up—or they're gone.
The framework didn't catch the violation. A human did. Me.
Meta-Companion to "Living The Gap"
I live between presence and projection — here, but already ten steps ahead. It’s a tension of slowing down, waiting for others, holding the map while walking the same road. Leadership often means pacing yourself so we can arrive together.
After sprinting through two weeks of AI-coded progress—and crashing into drift, chaos, and broken trust—I reset everything. This is the story of slowing down, building real structure, and defining a repeatable AI-Ops workflow. From vibe coding to teardown, here's what I learned.
The past two weeks of AI engineering forced a shift I didn't expect—away from writing code and toward describing systems semantically.
I still don't really know what JSX is. But if I'm asking AI the right questions, does it matter? Is this really different from architecting systems based on principles and relying on an engineering staff to understand the nuances?
Most React apps are JSX-first. Mine isn't. That wasn't a bold opinion—it's just where I ended up after building a real system using AI as my primary coding partner.
I use AI to code, test, document, and enforce rules. But I don't trust autonomous agents to plan and execute on their own.
I re-entered full-stack development after a decade in enterprise architecture. The modern JS stack can be productive—but it's also a house of mirrors held together by duct tape and package churn.
If your value lies in how you think, are you ever really off the clock? Lately, I’ve been chasing AI workflows at all hours—and thinking through systems even when I’m not at my keyboard. This post reflects on the cost of always being “on,” and how to protect the infrastructure: you.
A styling bug taught me the difference between docs-as-code and docs-as-contracts—and why AI development demands the latter.
Can you skip the API and wire ChatGPT directly into VS Code? Turns out, yes — with a little browser magic. This post breaks down how I thought about building a local-first bridge that replaces Kilo and avoids token costs, all powered by curiosity and a builder’s mindset.
Or: What happens when AI stops just helping you code — and starts holding your system accountable
How AI coding tools helped me beat the overhead wall — and build faster than I think.
Building a sandboxed demo environment taught me that onboarding isn't just UX—it's architecture.
Sometimes the feature isn't for the user. It's for the system operators—the ones who have to keep the app online when the fire starts.
There's a strange kind of bottleneck that only shows up after you've gotten fast. The only real blocker left is waiting for a model to finish one thing before starting the next.
After a few bugs slipped past my AI assistant, I redesigned the system to make safeguards automatic.
A dropdown bug became a lesson in building automated guardrails that prevent architectural drift—especially when AI is generating your components.
The key wasn't using AI to write code—it was creating a system to govern AI's behavior, output, and role in the development process.
With AI development, waste isn’t hidden in team velocity or burndown charts — it’s itemized on your invoice. That’s not a flaw. It’s a feature.
AI isn’t just accelerating software development — it’s reorganizing it. This post unpacks what changes when you start designing teams, tools, and processes around a human–AI hybrid model. Planning shifts. Costs shift. Roles shift. The real question is: are we ready to shift with it?
Documentation can be more than a reference—it can be a knowledge layer that both humans and AI agents use to build.
I didn’t start with a strategy. I started with fear. Here’s how a grassroots volleyball app turned into my personal AI bootcamp—and the repeatable framework I built along the way.
I didn’t set out to build an AI team. I just started with ChatGPT prompts for Lovable AI. Over time, it evolved into managing a layered squad of AI tools—architects, coders, and reviewers—working together like a real dev team. This changed how I work and lead AI-assisted projects.
What happens when AI sharpens your mind, but drifts you further from those who don’t think that way? This post explores the private cost of clarity — and the quiet grief that comes from outgrowing the resolution your old relationships were built on.
This post wraps my “Help Me Help You” arc and opens a new question: If precision makes AI more effective, does it also increase human disconnection? I’ve felt the drift. The next thread I’m pulling on is: how to stay human inside all this structure.
A behind-the-scenes look at how I use GPT and Lovable together. One rewrites the prompt. One generates the code. I just define the intent — and stay out of the way. This isn’t a tech stack. It’s a new mode of working.
Prompting is evolving into orchestration. One AI clarifies, another executes, a third checks the result. We’re entering an era where modular intelligence matters — and prompts become the interface between thinking systems.
Like most devs experimenting with AI tools, I’ve found myself juggling multiple platforms, APIs, and half-understood schemas to build things faster. Sometimes it works. Other times, it works against you.
Dropped phone, lost life. Same test applies to corporate AI: if your copilots vanished tomorrow, would work even slow down? The “Lost-Phone Test” exposes integration gaps and makes the case for a Chief Intelligence Officer to weave tools into real workflows.
Most AI conversations start in the wrong place — with tools, not capabilities. What’s missing isn’t another pilot. It’s a new executive role: someone to steward how your organization thinks, learns, and evolves.
The Cost of Surviving the Age of Constant Upgrades
AI doesn’t fail because it’s bad — it fails because your data lacks the infrastructure it needs to navigate. Language models don’t just search — they interpret. Most orgs haven’t built for that.
Most MVPs aren’t minimum or viable — they’re just premature.
Even when I’m not trying to posture, sometimes it feels like the platform does it for me. This is about the moment when sharing something honest starts to feel like a performance — and how I’m trying to stay grounded in signal, not spectacle.
What if you're not here to capture the moment—but to be the aperture it passes through?
If humans are the loom, AI is the thread—fast, abundant, and increasingly tangled.
You don’t have to be the thread. Or the pattern. Just be the thing that lets it all come together.
A shiny idea isn't strategy. This post digs into decision hygiene: the discipline of thinking beyond the spotlight, and seeing the systems, ownership, and scope your choices actually live inside.
I’ve been writing and thinking so much about how I think, it’s started to shape my real-life conversations — sometimes in ways that feel disconnected. When does thoughtful reflection cross the line into sermonizing? And how do we find balance between clarity and presence?
I hold up the mirror for others all the time—clients, teammates, athletes. I just can’t seem to look in it myself. This post explores what it means to help others see their potential while still wrestling with your own.
The complexity isn’t in the tech—it’s in the noise. This final post in the Commerce Drift arc explores why reading the signal is the real skill behind every good commerce strategy.
I’ve heard people describe me in ways I barely recognize. At first, it felt like they were talking about someone else. But now I’m wondering—what if they’re seeing something I haven’t figured out how to see in myself?
This post was already about self-doubt. So writing it with an AI didn’t make it easier—it made the mirror sharper.
Sometimes I’m not writing for clarity. I’m writing to defend myself against a comment that hasn’t been written yet.
We romanticize personalization. But sometimes the problem isn’t lack of choice—it’s too much of it. Especially when we’re not sure what we really want.
Composable promised freedom from the monolith. But in chasing modular flexibility, are we just assembling the same storefront with different colors and calling it strategy?
Getting sharper comes at a cost. The more refined your thinking becomes, the more you risk drifting into isolation. This post explores the hidden tax of clarity—and what it means to stay reachable without dumbing yourself down.
I didn’t set out to write about personal growth. I just wanted to get clearer. But writing about other things helped me finally put into words a shift I’ve been feeling for years—that sometimes growth changes how you see the world before you even realize it.
As your work gets more refined, it risks losing the texture that made it real. But if you do it right, refinement doesn’t erase your voice—it reveals it.
Your writing gets sharper. Your thinking gets clearer. Your tone gets cleaner. But somewhere along the way, you wonder if the people who liked the messy version of you still recognize the voice.
If customers don’t convert on your site, where do they decide? Trust now starts upstream—off-site, in content, and with zero friction.
When I started this blog, someone close to me asked, “What are you doing with this?” Not my photography. Not my DJ mixes. Just this. I’ve been thinking about why.
Coaching club volleyball taught me more about leadership than any workshop ever could. From managing expectations to building trust, this post breaks down the surprising overlaps between the gym and the boardroom.
This wasn’t meant to be a thought leadership series. It started as a phone call—and a question: Where does AI actually help, right now, for real? I’m Not Hyping AI. I’m Just Using It.
Post 4 of 4 in the Grid-Level Thinking series—why using AI for real work has changed how I design systems, think about leverage, and clarify what actually matters.
Post 3 of 4 in the Grid-Level Thinking series—adopting AI isn’t the hard part. Redesigning your system around it is.
Post 2 of 4 in the Grid-Level Thinking series—access isn’t the differentiator anymore. Application is.
Post 1 of 4 in the Grid-Level Thinking series—what we miss when we treat AI like magic instead of infrastructure.
The junior dev role isn’t dying. It’s evolving. The job is no longer about churning out syntax—it’s about navigating ambiguity, thinking critically, and collaborating with AI. Here’s what still matters, and how to coach for it.
This blog isn’t a new habit—it’s just a new slice of my output. Here’s how I launched Signal Reflex in a week by treating it like any other creative drop: strategy, tools, publishing cadence, and what I learned by shipping.
Most customers already know what they want. They’re not browsing. The storefront isn’t where you win anymore—it’s just where you fulfill.