Four Questions That Expose Fake GenAI Expertise

The gap between slick slide decks and the messy reality of building something that actually works is wider than most people realize.

Nino Chavez

Product Architect at commerce.com

The GenAI world is awash with “experts.” They’re everywhere, shouting about “paradigm shifts,” “synergistic AI transformations,” and the inevitable disruption of everything.

But for every enthusiastic pronouncement, I see a fundamental gap. A chasm between the slick slide decks and the messy reality of building something that actually works, provides measurable value, and doesn’t secretly hemorrhage cash.

It’s time to call out the “Markitechts”—those who expertly weave narratives but can’t draw an architecture diagram that holds water, or explain a true operational cost. They’re selling a magic show, not an engineering solution.

Raising the Floor

This is about raising the floor for everyone. Moving past the superficial “tell” and demanding the gritty “show.” Because a high floor—a universally understood foundation of practical, operational knowledge—is far more resilient than a few isolated high-flyers.

If you’re truly building with GenAI, if you’re truly leading a team or investing in this space, stop asking what GenAI can do. Start asking how.

The Litmus Test

Ask your GenAI “expert” these questions. If they flounder, if they pivot to buzzwords, or if they can’t provide specifics, you’ve found your Markitecht.

The pipeline question: “Walk me through the data pipeline for your RAG implementation. Specifically, what were the non-obvious trade-offs in your chunking strategy, and why did you make them?”
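
To make that concrete: below is a minimal sketch, in Python, of what one deliberate chunking decision looks like. The chunk size and overlap are illustrative placeholders, not recommendations; the point is that a genuine builder can tell you exactly why they picked their numbers.

```python
# Fixed-size chunking with overlap. Both numbers are explicit trade-offs:
# larger chunks keep more context but dilute retrieval precision and raise
# prompt cost; overlap reduces boundary loss but duplicates storage.
# The defaults here are illustrative, not recommendations.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks of roughly chunk_size words."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
    return chunks


if __name__ == "__main__":
    sample = "lorem ipsum " * 1000          # ~2,000 words of dummy text
    print(len(chunk_text(sample)))          # 5 chunks with these settings
```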

The cost question: “Beyond headline training and inference costs, what’s your granular formula for estimating the total operational cost of this agentic system over a year, including re-training, human-in-the-loop review, and easily overlooked cloud egress fees?”
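
A credible answer can be written down as a back-of-the-envelope model. The sketch below is hedged: every figure is a placeholder I invented, but each line item should exist, with a real number attached, in an actual estimate.

```python
# Back-of-the-envelope annual cost model for an agentic system.
# All figures are hypothetical placeholders; the structure is the point.

requests_per_day = 50_000
tokens_per_request = 3_000          # prompt + completion, averaged
cost_per_1k_tokens = 0.002          # USD, blended input/output rate

inference = requests_per_day * 365 * tokens_per_request / 1_000 * cost_per_1k_tokens
retraining = 4 * 12_000             # quarterly fine-tune / re-index runs
human_in_the_loop = 2 * 85_000      # reviewers handling escalations
egress_and_storage = 12 * 1_500     # vector DB, logging, cross-region egress

annual_total = inference + retraining + human_in_the_loop + egress_and_storage
print(f"Estimated annual operational cost: ${annual_total:,.0f}")
```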

The value question: “How are you measuring model performance quantitatively against a specific business objective, beyond a generic benchmark? Show me your framework for qualitative evaluation too. How do you know it’s not just hallucinating elegantly?”
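
As a sketch of what I mean, here is a toy evaluation harness that pairs a quantitative business metric (ticket deflection, chosen purely as a hypothetical) with a qualitative groundedness check from a human reviewer. The field names and cases are illustrative.

```python
# Toy evaluation harness: one business metric plus one qualitative check.
# Field names, cases, and metrics are hypothetical illustrations.

from dataclasses import dataclass


@dataclass
class EvalCase:
    query: str
    model_answer: str
    deflected_ticket: bool     # business outcome: did the user avoid filing a ticket?
    grounded_in_source: bool   # reviewer confirmed the cited sources support the answer


def score(cases: list[EvalCase]) -> dict[str, float]:
    n = len(cases)
    return {
        "ticket_deflection_rate": sum(c.deflected_ticket for c in cases) / n,
        "groundedness_rate": sum(c.grounded_in_source for c in cases) / n,
    }


if __name__ == "__main__":
    cases = [
        EvalCase("reset my password", "Use the self-service portal at...", True, True),
        EvalCase("refund policy", "Refunds are issued within 90 days...", False, False),
    ]
    print(score(cases))   # {'ticket_deflection_rate': 0.5, 'groundedness_rate': 0.5}
```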

The failure question: “Describe a significant production failure of this specific GenAI approach. What was the root cause, and how did you architect the system differently to account for that failure mode today?”
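
One pattern a real answer often includes is a guardrail added after the failure. The sketch below is hypothetical, with made-up names and a made-up threshold: after an incident where the system answered confidently without supporting context, retrieval gains an explicit confidence gate that escalates to a human instead of generating.

```python
# Hypothetical post-incident guardrail: refuse and escalate when retrieval
# confidence is too low to trust a generated answer. Threshold is illustrative.

def answer_or_escalate(query: str,
                       retrieved: list[tuple[str, float]],
                       min_score: float = 0.75) -> str:
    """Answer only when at least one retrieved passage clears the confidence bar."""
    if not retrieved or max(score for _, score in retrieved) < min_score:
        return f"ESCALATE: insufficient supporting context for {query!r}; routing to a human."
    context = " | ".join(passage for passage, _ in retrieved)
    return f"ANSWER using {len(retrieved)} passage(s): {context}"


if __name__ == "__main__":
    # Low-confidence retrieval triggers the escalation path.
    print(answer_or_escalate("What is our refund SLA?", [("SLA doc excerpt", 0.62)]))
```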

If they can answer these with precision, with details on real-world constraints, compromises, and lessons learned, then you’ve found someone genuinely building. If not, you’re likely listening to the digital equivalent of a snake-oil salesman.

What This Means

The future of AI in the enterprise depends not on magical thinking, but on rigorous engineering, thoughtful architecture, and a deep understanding of operational realities.

I’m less interested in stopping the showmen than in knowing how to recognize the engineers. These four questions have been pretty reliable for me so far.

Originally Published on LinkedIn

This article was first published on my LinkedIn profile, where you can view the original post and join the conversation.
