How I Made a Cache Bug Impossible to Reintroduce
A dropdown bug became a lesson in building automated guardrails that prevent architectural drift—especially when AI is generating your components.
Nino Chavez
Product Architect at commerce.com
This one started like a lot of bugs do: dropdowns weren’t updating after a bulk import. The data was there, the page reloaded, but the dropdowns stayed stale. Console warnings, frustrated debugging, and that subtle, familiar itch—“Something here isn’t wired right.”
Eventually, I traced it to a cache invalidation mismatch.
One component (BulkTeamUploader) was correctly invalidating the React Query cache. Another (AssignTeams) was bypassing it entirely, calling Supabase directly. They weren’t speaking the same language—and the result was inconsistent state and broken UX.
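To make the mismatch concrete, here's roughly what the two patterns looked like side by side. This is a sketch, not the actual code: the hook names, table names, and import path are my stand-ins.

```ts
import { useMutation, useQueryClient } from '@tanstack/react-query';
import { supabase } from '../lib/supabaseClient'; // hypothetical client module

// BulkTeamUploader's pattern: mutate through React Query, then
// invalidate the 'teams' cache so every subscriber refetches.
export function useBulkTeamUpload() {
  const queryClient = useQueryClient();
  return useMutation({
    mutationFn: async (rows: Record<string, unknown>[]) => {
      const { error } = await supabase.from('teams').insert(rows);
      if (error) throw error;
    },
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: ['teams'] });
    },
  });
}

// AssignTeams' pattern: a one-off fetch that sidesteps the cache
// entirely. React Query never learns the data changed, so stale
// dropdowns are the natural result.
export async function fetchTeamsDirectly() {
  const { data, error } = await supabase.from('teams').select('*');
  if (error) throw error;
  return data;
}
```

The first pattern tells every subscriber the 'teams' data is stale; the second leaves the cache convinced nothing changed.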
The Fix
I could’ve patched it by syncing the fetch call. But I didn’t want to see this kind of bug again—especially as I scale this project and rely more heavily on AI-generated components.
So I built a multi-layer safeguard system.
Custom ESLint Rules
I wrote an internal ESLint plugin with rules like no-direct-supabase-in-components, require-approved-queries, and cache-invalidation-consistency. Each rule reports a build-breaking error when violated.
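For a sense of scale, a minimal version of the first rule fits in a few dozen lines. This sketch assumes components live under src/components/ and that the Supabase client is imported as supabase; the real rules are stricter.

```ts
// eslint-plugin-internal/rules/no-direct-supabase-in-components.ts
// Minimal sketch: flag any use of the supabase client inside a
// component file, where data access should go through approved hooks.
import type { Rule } from 'eslint';

const rule: Rule.RuleModule = {
  meta: {
    type: 'problem',
    schema: [],
    messages: {
      noDirectSupabase:
        'Use an approved React Query hook instead of calling Supabase directly in a component.',
    },
  },
  create(context) {
    // Only enforce in component files (assumed convention: src/components/).
    if (!/src[\\/]components[\\/]/.test(context.getFilename())) {
      return {};
    }
    return {
      // esquery selector: any `supabase.<something>` member access.
      'MemberExpression[object.name="supabase"]'(node: Rule.Node) {
        context.report({ node, messageId: 'noDirectSupabase' });
      },
    };
  },
};

export default rule;
```

Because it hooks into the AST rather than grepping, it won't catch an aliased client like const db = supabase unless you extend the selector, which is exactly the kind of edge case real rules accumulate over time.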
Automated Validation Script
A custom TypeScript script scans for mismatched fetch/cache patterns, reports violations with fix suggestions, and runs as a pre-commit hook.
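A stripped-down version of that script, with stand-in heuristics and paths where the real ones would go:

```ts
// scripts/validate-data-fetching.ts (illustrative sketch)
import { readFileSync } from 'node:fs';
import { globSync } from 'glob'; // assumes the 'glob' package

let violations = 0;

for (const file of globSync('src/components/**/*.{ts,tsx}')) {
  const source = readFileSync(file, 'utf8');

  // Heuristic 1: direct Supabase access in a component file.
  if (/\bsupabase\.from\(/.test(source)) {
    violations++;
    console.error(`${file}: direct supabase.from() call`);
    console.error('  fix: move it into an approved query hook');
  }

  // Heuristic 2: a mutation that never invalidates the cache.
  if (/useMutation\(/.test(source) && !/invalidateQueries\(/.test(source)) {
    violations++;
    console.error(`${file}: useMutation without invalidateQueries`);
    console.error('  fix: invalidate the affected query keys in onSuccess');
  }
}

// Non-zero exit fails the pre-commit hook (and CI).
process.exit(violations > 0 ? 1 : 0);
```

The regexes are crude next to AST-level lint rules, but they're fast enough to run on every commit and can flag file-level patterns, like a mutation with no invalidation anywhere in the file.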
Documentation as Constraint
Everything is now codified in docs/development/data-fetching-patterns.md: approved query patterns, troubleshooting steps, and reusable templates for future component generation.
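To give a flavor of the templates (names here are illustrative, not lifted from the repo): the query key is a shared constant and the mutation hook carries its own invalidation, so anything generated from the template gets cache consistency for free.

```ts
import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query';
import { supabase } from '../lib/supabaseClient';

// One shared key per entity, so reads and invalidations can't drift apart.
export const teamsKey = ['teams'] as const;

export function useTeams() {
  return useQuery({
    queryKey: teamsKey,
    queryFn: async () => {
      const { data, error } = await supabase.from('teams').select('*');
      if (error) throw error;
      return data;
    },
  });
}

export function useAssignTeams() {
  const queryClient = useQueryClient();
  return useMutation({
    mutationFn: async (assignment: { teamId: string; userId: string }) => {
      const { error } = await supabase.from('team_members').insert(assignment);
      if (error) throw error;
    },
    // The template bakes the invalidation in, so a generated component
    // can't forget it.
    onSuccess: () => queryClient.invalidateQueries({ queryKey: teamsKey }),
  });
}
```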
Why This Matters with AI
I’m relying more on AI to scaffold components, write boilerplate, generate workflows. But with that speed comes risk—especially if the AI doesn’t “remember” the architectural rules.
By creating automated guardrails, I’ve made it harder for myself (or a co-pilot) to unknowingly break core patterns. And I tested it—you literally can’t reintroduce this bug without triggering a lint error, failing a pre-commit, or breaking the build.
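The wiring that enforces all of this is ordinary pre-commit plumbing. The exact tooling isn't the point, so this excerpt assumes husky plus lint-staged and the tsx runner:

```json
{
  "scripts": {
    "validate:data-fetching": "tsx scripts/validate-data-fetching.ts"
  },
  "lint-staged": {
    "src/**/*.{ts,tsx}": [
      "eslint --max-warnings 0",
      "tsx scripts/validate-data-fetching.ts"
    ]
  }
}
```

If either the ESLint pass or the validation script exits non-zero, the commit is rejected before it ever reaches CI.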
The Bigger Picture
This wasn’t just about fixing a dropdown. It was a reminder that AI won’t catch architectural drift unless you teach it how.
We’ve all used style guides, linters, static analyzers, CI pipelines with type checks. But now the code is moving faster—because we’re not always the ones writing it. AI can scaffold things in seconds, but it doesn’t carry your system context, your constraints, or your edge cases—unless you embed that context back into the workflow.
So that’s what I’m doing. Codifying patterns. Enforcing them. Turning architectural decisions into lint rules, scripts, and guardrails that even an AI co-pilot has to obey.
We haven’t figured out how to trust AI code yet. Not because it won’t compile—but because it skips the part where we sweat the details. So for now, I build my own safety rails.
Originally Published on LinkedIn
This article was first published on my LinkedIn profile, where the original post and discussion live.