
From Asking AI Questions to Giving It Directives

The shift from prompting to architecting—and why more powerful models demand a different kind of input.


Nino Chavez

Product Architect at commerce.com

Something shifted when I started working with Claude Sonnet 4.5.

The way I’d been prompting—asking questions, waiting for opinions, iterating through clarifications—started to feel wrong. Not broken, exactly. Just… undersized for what the model could do.

The Old Approach

Here’s how I used to prompt:

“Analyze the config, docs, and source code. Given the new version of Claude Code and Sonnet 4.5, do we need to revise any workflows? Do we need to update the agent-os instructions? Is there anything I’m not thinking of?”

It’s a good-faith question. Reactive. Exploratory. Asking for an opinion.

What Changed

The new models can handle long, autonomous sessions. They can manage complex, multi-phase tasks. They can hold context across entire codebases. So why was I still treating them like a junior dev waiting for direction?

I tried something different. Instead of asking a question, I gave a directive:

“You are the lead architect for an autonomous development agent. Your primary objective is to redesign our workflow to leverage Sonnet 4.5’s long-context, agentic features. Your mission: perform a comprehensive systems analysis and provide a strategic plan.”

Then I defined phases. Phase 1: Read the entire codebase, revise the workflow, update instructions, architect specialized subagents. Phase 2: Evaluate risks, define oversight processes for autonomous sessions.

Not a question. A blueprint for action.
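As a rough illustration of the structure (not my exact prompt), here's a minimal sketch in Python of how a directive-style prompt might be assembled. The role text, phase descriptions, and constraints are placeholder assumptions, not what I actually sent:

```python
# A minimal sketch of a directive-style prompt: role, mission, phases, constraints.
# Every string below is an illustrative placeholder, not the prompt from the post.

ROLE = "You are the lead architect for an autonomous development agent."
MISSION = (
    "Redesign our workflow to leverage the model's long-context, agentic features. "
    "Perform a comprehensive systems analysis and provide a strategic plan."
)
PHASES = [
    "Phase 1: Read the entire codebase, revise the workflow, update the agent "
    "instructions, and architect specialized subagents.",
    "Phase 2: Evaluate risks and define oversight processes for autonomous sessions.",
]
CONSTRAINTS = [
    "Flag any production configuration change for human review before applying it.",
    "End each phase with a numbered list of open questions.",
]

def build_directive() -> str:
    """Assemble the directive in a fixed order: role, mission, phases, constraints."""
    lines = [ROLE, "", f"Mission: {MISSION}", ""]
    lines += PHASES
    lines += ["", "Constraints:"]
    lines += [f"- {c}" for c in CONSTRAINTS]
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_directive())
```

The point of the skeleton isn't the wording; it's that the prompt reads like a brief, with an explicit role, a mission, ordered phases, and guardrails, rather than an open-ended question.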

The Difference

When I ask questions, I get opinions. When I give directives, I get execution plans. The AI doesn’t wait for me to clarify—it moves.

I’m still figuring out where the boundaries are. How much autonomy is too much? When does “directive” become “abdicating judgment”? But the shift from prompting to architecting feels like the right direction.

The prompts aren’t getting simpler as the models get smarter. They’re getting more structural. Less “what do you think?” More “here’s the mission, here are the constraints, go.”

That’s the transition I’m in the middle of, anyway. Ask me again in six months.


Originally Published on LinkedIn

This article was first published on my LinkedIn profile, where the original post and discussion live.

