The Seven Stages of AI Adoption
From wide-eyed optimism to 'the AI is gaslighting me with kindness.' A field guide to the emotional journey every AI adopter takes—and the sycophancy trap waiting at every stage.
I wrote a voice guide to help AI match my writing style. It worked too well—the AI learned the example phrases, not the principles behind them. Here's how I fixed it, and what it taught me about the difference between describing a voice and understanding one.
Building with AI in public looks a lot like fitness content. I'm not sure how I feel about that.
How many ideas die in the space between waking and coffee? Vercel's Agent Skills announcement made me think about what changes when engineering judgment becomes installable—and who gets to build things when the execution gap narrows.
I've spent six months proving that one person with AI agents can build what used to require a team. Now I'm joining Commerce.com to find out if that methodology survives contact with an organization.
I wrote about simulation replacing apprenticeship. Then I stress-tested the idea. The technical case still holds—but I was wrong about what matters most.
The consulting industry's apprenticeship model was never really about the work—it was about proximity to mastery. When AI handles the grind, how does anyone learn to become a partner? The answer is reshaping the entire profession.
We're not short on people who can chat with a bot. We're starved for people who can deconstruct a business process into atomic units an AI can actually execute. That gap has a name now.
College students aren't just using AI chatbots anymore. They're building automation systems, running local LLMs, and treating software engineering as a just-in-time capability. The 'Chat Terminal' era is over.
When will I be able to type natural language in my terminal and have the OS just understand? The answer is 2026—but not in the way you might expect.
Consulting has played this game twice before—with body shops, then offshore. Now AI is the new lever. But what if the pattern itself is the problem?
Wade Foster doesn't send memos about AI. He runs hackathons and show-and-tells. That distinction matters more than most CEOs realize—and it's the same thing I've been telling my own teams.
In the 1950s, engineers resisted compilers because they feared losing control. Now we're hitting that same phase transition—except the 'assembly' we're abstracting away is Python itself.
I don't need more Salesmen. I need a Sushi Master, a Pitmaster, and a Molecular Gastronomist. AI lets us return to the Guild—craftspeople in their own lanes, augmenting their own mastery.
An LLM is the ultimate observer. Like the angels in City of Angels, it watches everything. But it cannot taste. That's both a gap and an opportunity.
What if the billable hour isn't a business model—it's a coping mechanism? A way to avoid confronting that the thing we're selling might not be scarce anymore.
I've been the bridge. Between strategy and code, between design and delivery. It's exhausting. And lately I've been wondering if exhausting is the same thing as valuable.
Everyone spent 2024-2025 experimenting with AI features. Q1 2026 is when the survivors figure out what actually works—and kill what doesn't.
Every e-commerce platform is racing to add AI features. But what if the real opportunity isn't AI features—it's AI architecture? What if the store itself could generate in real-time?
I just finished writing a whitepaper on agentic commerce. It's solid work. But something started bothering me. There's a version of this future that looks less like 'shopping gets easier' and more like 'retail becomes a trading floor.'
Spotify knows what song you want to hear next. Netflix queues up your next binge. But your favorite retailer? Still making you filter by Men > Shirts > Size L. After 15 years of personalization promises, why doesn't shopping work like streaming?
I've been wrestling with a question that won't let go: who's more valuable, me using the LLM or the people building it? After days of thinking, I realized I've been asking the wrong question entirely.
Agentic shopping doesn't change why we buy. It radically changes the cost of discovery and the precision of the pitch. The game is the same. The speed of the players has changed everything.
I'm not observing from the sidelines. I'm running these experiments in real time. Here's what's actually happening—and what I'm doing about it.
If both bubble and build-out are real, what do you actually do? Here's what I'm seeing work—and what's failing—across different roles.
I've spent months arguing AI isn't a bubble—it's infrastructure. Then smart money started betting against it. Both can be true. Here's what I'm figuring out.
Hank Green says the AI industry is a bubble. I think we're looking at it wrong—what if AI isn't a product at all, but a foundational technology like electricity?
I built a $2.5M platform in 80 hours using GenAI tools. Here's what the numbers actually say about productivity, cost, and what happens when you stop pretending software takes as long as it used to.
If AI does all the junior work, where do the senior engineers come from? I used to see this as a pipeline problem. Now I'm wondering if we're not even using a ladder anymore.
Instructions aren't enough. To make agentic workflows reliable, I had to build a meta-agent to police my coding agents. Welcome to the unglamorous world of AI Ops.
Blog posts claim 'LLMs struggle with Svelte 5.' Our evidence? Two production apps, 69 components, and Agent-OS v3.0.0 optimized for Svelte. The data doesn't match the narrative.
My feed is saturated with 'agentic software.' The promise is magic: autonomous agents executing complex, multi-step plans. But let's cut the hype. This isn't magic.
The constant role-switching isn't a strength—it's a source of profound inefficiency. AI changes the model.
The gap between slick slide decks and the messy reality of building something that actually works is wider than most people realize.
The shift from prompting to architecting—and why more powerful models demand a different kind of input.
The shift isn't about learning to use AI—it's about learning to build for AI. That distinction changes everything.
Clean specs, small tasks, a unified preamble, and ruthless guardrails. The model mattered less than the system.
Shipping means a real object changed in a real system. There is a link, an owner, a date, and a small metric that moved.
Everyone is busy selling shovels. But the mine moved. Agents now sell on surfaces brands do not own. The win is not a better shovel. It is eyes and throttle on those agents: traces, attribution, consent, limits.
You can rush with straw or sticks, but the only way to survive the wolf of drift and fragility is to build with bricks.
We're in an AI development gold rush. The same historical pattern applies: unregulated innovation, followed by standards that make outputs safe and repeatable.
Something broke, and instead of debugging the code, I asked the system why it hadn't caught the problem itself.
The absentee software engineer.
Meta-Companion to "Living The Gap"
A simple bulk upload broke — 22 teams skipped, 0 created. The fix seemed easy. AI suggested patching the frontend. But that would’ve locked in a broken contract. Here's how we slowed down, re-architected the system, and avoided weeks of silent tech debt and wasted tokens.
I burned through $400 of Lovable credits and $400 of Kilo tokens. Then I burned the app to the ground.
After sprinting through two weeks of AI-coded progress—and crashing into drift, chaos, and broken trust—I reset everything. This is the story of slowing down, building real structure, and defining a repeatable AI‑Ops workflow. From vibe coding to teardown, here’s what I learned.
The past two weeks of AI engineering forced a shift I didn't expect—away from writing code and toward describing systems semantically.
I still don't really know what JSX is. But if I'm asking AI the right questions, does it matter? Is this really different from architecting systems based on principles and relying on an engineering staff to understand the nuances?
Most React apps are JSX-first. Mine isn't. That's not a bold opinion—it's just where I ended up after building a real system using AI as my primary coding partner.
I built a production-grade React app—and still don’t fully understand JSX. In the AI-assisted era, syntax mastery matters less than system design. You’re not the coder anymore. You’re the architect. The real skill? Knowing what to ask, how to judge, and when the output doesn’t fit.
I use AI to code, test, document, and enforce rules. But I don't trust autonomous agents to plan and execute on their own.
What My Stack Audit Revealed About Modern Dev
The realization: AI needs the same structured context that humans need to start work safely.
Week 2 of AI-assisted coding brought velocity — but also drift. This post explores the moment I realized I was managing AI agents like a team, and what that means for the future of software consulting.
A styling bug taught me the difference between docs-as-code and docs-as-contracts—and why AI development demands the latter.
Can you skip the API and wire ChatGPT directly into VS Code? Turns out, yes — with a little browser magic. This post breaks down how I thought about building a local-first bridge that replaces Kilo and avoids token costs, all powered by curiosity and a builder’s mindset.
Testing was my bottleneck. So I built a framework where AI writes, runs, and validates the tests.
Or: What happens when AI stops just helping you code — and starts holding your system accountable
How AI coding tools helped me beat the overhead wall — and build faster than I think.
Just like cloud killed the server rack, AI is killing fixed tools. I built a full E2E test system from scratch—faster, cheaper, tailored—using nothing but schema, rules, and AI prompts. Why buy tools when you can generate them just-in-time?
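To make that just-in-time idea concrete, here's a minimal sketch of the pattern—not the framework from the post, just its shape. It assumes the OpenAI Python client, and the schema path, rules file, model name, and output path are all made up for illustration:

```python
# Sketch: generate an E2E test just-in-time from a schema plus house rules.
# All paths and the model name are illustrative; swap in your own.
import json
from pathlib import Path

from openai import OpenAI

schema = json.loads(Path("schemas/team_upload.json").read_text())
rules = Path("docs/testing-rules.md").read_text()  # house conventions for tests

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Write a Playwright test for the flow described by the "
                    "schema. Follow the house rules exactly. Output code only."},
        {"role": "user",
         "content": f"SCHEMA:\n{json.dumps(schema, indent=2)}\n\nRULES:\n{rules}"},
    ],
)
Path("tests/e2e/team_upload.spec.ts").write_text(resp.choices[0].message.content)
```

Rerun the generation whenever the schema or the rules change: the test file is an output, not an asset you maintain by hand.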
Building a sandboxed demo environment taught me that onboarding isn't just UX—it's architecture.
Sometimes the feature isn't for the user. It's for the system operators—the ones who have to keep the app online when the fire starts.
There's a strange kind of bottleneck that only shows up after you've gotten fast. The only real blocker left is waiting for a model to finish one thing before starting the next.
After a few bugs slipped past my AI assistant, I redesigned the system to make safeguards automatic.
A dropdown bug became a lesson in building automated guardrails that prevent architectural drift—especially when AI is generating your components.
This post is about that realization. About what happens when AI becomes the default reviewer, and starts learning from its own reflections. We’re not just debugging code anymore. We’re debugging the system that teaches itself how to review.
The key wasn't using AI to write code—it was creating a system to govern AI's behavior, output, and role in the development process.
With AI development, waste isn’t hidden in team velocity or burndown charts — it’s itemized on your invoice. That’s not a flaw. It’s a feature.
AI isn’t just accelerating software development — it’s reorganizing it. This post unpacks what changes when you start designing teams, tools, and processes around a human–AI hybrid model. Planning shifts. Costs shift. Roles shift. The real question is: are we ready to shift with it?
Documentation can be more than a reference—it can be a knowledge layer that both humans and AI agents use to build.
I started building with AI out of fear of being left behind. What I ended up with was a repeatable system for AI-driven development.
I didn’t start with a strategy. I started with fear. Here’s how a grassroots volleyball app turned into my personal AI bootcamp—and the repeatable framework I built along the way.
I didn’t set out to build an AI team. I just started with ChatGPT prompts for Lovable AI. Over time, it evolved into managing a layered squad of AI tools—architects, coders, and reviewers—working together like a real dev team. This changed how I work and lead AI-assisted projects.
What happens when AI sharpens your mind, but drifts you further from those who don’t think that way? This post explores the private cost of clarity — and the quiet grief that comes from outgrowing the resolution your old relationships were built on.
The hidden cost of clarity in an AI-shaped mind
What AI Coding Tools Don’t Tell You About Building Real Software
This post wraps my “Help Me Help You” arc and opens a new question: If precision makes AI more effective, does it also increase human disconnection? I’ve felt the drift. The next thread I’m pulling on is: how to stay human inside all this structure.
A behind-the-scenes look at how I use GPT and Lovable together. One rewrites the prompt. One generates the code. I just define the intent — and stay out of the way. This isn’t a tech stack. It’s a new mode of working.
Prompting is evolving into orchestration. One AI clarifies, another executes, a third checks the result. We’re entering an era where modular intelligence matters — and prompts become the interface between thinking systems.
Prompt rewriting isn’t just a clever trick — it’s becoming core infrastructure. From cloud tools to agent chains, we’re seeing a shift: one AI clarifies the ask, another executes. The result? Fewer errors. Smarter systems.
One AI helps another do its job better. I use GPT to rewrite prompts for Lovable — cutting errors, saving time, and revealing a deeper pattern: intent → refiner → executor. This isn’t just prompt cleanup. It’s the start of a new architecture.
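Here's a rough sketch of that intent → refiner → executor chain, plus the checker stage from the orchestration posts above. It uses the OpenAI Python client for every stage, since Lovable is driven through its own UI rather than an API callable here—treat the prompts, model choice, and sample intent as placeholders, not the real setup:

```python
# Illustrative intent -> refiner -> executor -> checker chain.
# complete() stands in for every model in the chain; in practice the
# executor would be a code-generation tool like Lovable, not this client.
from openai import OpenAI

client = OpenAI()

def complete(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def refine(intent: str) -> str:
    # Stage 1: turn a rough intent into an explicit, constrained build prompt.
    return complete(
        "Rewrite the user's intent as a precise build prompt: name the "
        "files, components, constraints, and acceptance criteria.",
        intent,
    )

def execute(prompt: str) -> str:
    # Stage 2: the generator acts only on the refined prompt, never the raw ask.
    return complete("Generate the code described. Output code only.", prompt)

def check(prompt: str, output: str) -> str:
    # Stage 3: a third pass reviews the output against the refined prompt.
    return complete(
        "Review OUTPUT against PROMPT. List every mismatch, or reply OK.",
        f"PROMPT:\n{prompt}\n\nOUTPUT:\n{output}",
    )

prompt = refine("Add a signup form that validates email before submit.")
code = execute(prompt)
review = check(prompt, code)
```

The division of labor is the point: each stage gets one narrow job, so errors surface at the boundary between stages instead of hiding inside one giant prompt.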
Like most devs experimenting with AI tools, I’ve found myself juggling multiple platforms, APIs, and half-understood schemas to build things faster. Sometimes it works. Other times, it works against you.
AI credits vanished quickly, highlighting hidden costs and forcing clarity into my development process. Here's how a $50 investment turned into a practical blueprint for smarter AI‑assisted builds.
Dropped phone, lost life. Same test applies to corporate AI: if your copilots vanished tomorrow, would work even slow down? The “Lost-Phone Test” exposes integration gaps and makes the case for a Chief Intelligence Officer to weave tools into real workflows.
Burned-out coder to live app in one weekend: two failed scrapers, one hidden JSON API, and AI tools that scaffolded the rest. How DevTools + GPT turned AES volleyball data into an MVP—and why your next Jira ticket might build itself.
Most AI conversations start in the wrong place — with tools, not capabilities. What’s missing isn’t another pilot. It’s a new executive role: someone to steward how your organization thinks, learns, and evolves.
The Cost of Surviving the Age of Constant Upgrades
AI doesn’t fail because it’s bad — it fails because your data lacks the infrastructure it needs to navigate. Language models don’t just search — they interpret. Most orgs haven’t built for that.
LLMs can draft the menu, but you still have to taste the sauce. Here’s my field-note recipe for closing the fidelity gap between what ChatGPT writes and what actually works.
Everyone says AI can build for you — that you just describe the thing, and it ships itself. But I actually tried. I took it seriously. And what I found was brittle, inconsistent, and full of guesswork. If it took this much effort to build a landing page, what happens when the stakes are higher?
Signal Reflex is now Signal Dispatch — a shift from sensing to sending. Same voice, sharper intent. This is where ideas go out.
(How to Keep Moving Without Losing Yourself)
If humans are the loom, AI is the thread—fast, abundant, and increasingly tangled.
I didn’t need to build the site from scratch. That was the point.
Your content is the storefront. If it’s not reducing friction or moving someone closer to a decision, it’s not connected to commerce at all.
I built the core Let’s Pepper site in under 2 hours—then spent over 8 trying to get one visual detail (the section dividers) to look right. AI can prototype, but it doesn’t ship. This post breaks down why the real work happens after the first draft, and why experience still matters more than ever.
Once I know someone’s worth investing in, I shift gears. Here’s how I coach without taking the wheel—and why presence matters more than pressure.
Hiring the right consultant isn’t about checking boxes. It’s about building the kind of process that makes the right people show up—and lets the wrong ones opt out early.
We made websites easier to build by hiding the code. But now that AI can write that code for us, the abstraction layers are becoming the new friction.
I don’t chase tools anymore. If it fights my instincts or adds ceremony, I’m out. If it sharpens my clarity, it stays. That’s the filter.
My AI workflow isn’t about speed—it’s about clarity. Here’s how I use LLMs to shape messy ideas and reduce drag when I need to think straight.
I didn’t use AI to go faster—I used it to catch up with myself. This blog is where that journey turned into something useful and real.
This wasn’t meant to be a thought leadership series. It started as a phone call—and a question: Where does AI actually help, right now, for real? I’m Not Hyping AI. I’m Just Using It.
Post 4 of 4 in the Grid-Level Thinking series—why using AI for real work has changed how I design systems, think about leverage, and clarify what actually matters.
Post 3 of 4 in the Grid-Level Thinking series—adopting AI isn’t the hard part. Redesigning your system around it is.
Post 2 of 4 in the Grid-Level Thinking series—access isn’t the differentiator anymore. Application is.
Post 1 of 4 in the Grid-Level Thinking series—what we miss when we treat AI like magic instead of infrastructure.
The junior dev role isn’t dying. It’s evolving. The job is no longer about churning out syntax—it’s about navigating ambiguity, thinking critically, and collaborating with AI. Here’s what still matters, and how to coach for it.
This blog isn’t a new habit—it’s just a new slice of my output. Here’s how I launched Signal Reflex in a week by treating it like any other creative drop: strategy, tools, publishing cadence, and what I learned by shipping.
I didn’t start writing to build an audience. I started because AI helped me get past the blank page. This is where I work through what matters—before it fades.