Last week, a developer said something in a group that reminded me of a pit I’ve fallen into many times myself:
“My agent keeps saying ‘you should manually verify this’—using it feels more trouble than doing it myself.”
I totally get that frustration.
AI Agent’s Lazy Mode
If you’ve used an AI coding agent for a while, you’ve definitely seen these sentences:
- “This issue might be the database connection—I’d suggest you verify.”
- “The feature has been implemented; it should work.”
- “I can’t verify this directly—you can test it manually.”
- “This might be an environment variable issue.”
The agent isn’t broken. This is the standard response when AI models face uncertainty—leaving the conclusion to the user, retreating to the “safe” position of giving suggestions.
The thing is, you set up an agent to get things done, not to receive a “suggestion list.” You want it to actually solve the problem.
Why Do Agents Deflect Responsibility?
Model training bakes in a tendency: when uncertain, deflecting responsibility feels “safer” than risking a mistake.
For the model, saying “you should verify this” is a low-risk answer—it won’t make mistakes or cause extra problems. But for you, that answer has no value.
This becomes clearer with a different frame:
Imagine you just hired an engineer and asked them “why does this API keep returning 401?”
Lazy-mode engineer: “The token might be expired—go check the API docs.”
The engineer you want: Runs a curl to check the response format, checks token expiration, tries refresh, confirms it’s fixed, tells you the result.
The difference isn’t about ability—it’s about ownership.
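What “checks token expiration” looks like in practice can be as small as decoding the token locally instead of guessing. Here is a minimal sketch (the function name is mine, and it assumes a standard three-part JWT carrying an `exp` claim in Unix seconds):

```python
import base64
import json
import time

def token_expired(jwt, now=None):
    """Check a JWT's exp claim locally instead of guessing.

    Hypothetical helper -- assumes a standard header.payload.signature
    token whose payload contains an 'exp' (Unix seconds) claim.
    """
    payload_b64 = jwt.split(".")[1]
    # Restore the base64url padding that JWTs strip off
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    now = time.time() if now is None else now
    return payload["exp"] <= now
```

A lazy-mode answer says “the token might be expired”; an owner runs a check like this and reports “exp was 09:14, it is now 10:02, the token is expired.”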
YES Discipline Engine: 5 Iron Rules
YES Discipline Engine is a set of behavior rules embedded in the agent’s system prompt. Its name comes from the core philosophy: When the agent says “I’m done,” you should be able to say “Yes, I trust you”—not “let me verify.”
Rule 1: Never Guess, Always Verify
Any “might be X” must first become “I ran Y, got result Z.”
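Concretely, “might be the database connection” should become an actual probe. A minimal sketch (host and port are placeholders for your real database):

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Turn 'might be the database connection' into 'I ran a TCP
    probe against host:port and got result True/False'.

    Illustrative only -- point host/port at your actual database.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```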
Rule 2: Never Deflect, Solve It Yourself
The boundary is simple: tasks within scope (allowed files, assigned tasks) the agent solves itself. It asks for help only when something is out of scope—and then it states exactly what it needs.
Rule 3: Never Claim Without Evidence
“Feature is complete” on its own doesn’t count as complete. “Feature is complete” backed by verification output does.
Rule 4: Never Repeat Failed Methods
A method that already failed will fail again unchanged. Before retrying, change at least one thing—the approach, the input, or the environment.
Rule 5: Every Task Has a Clear Status
Task endings must be one of three:
- ✅ Done: with verification output
- ❌ Blocked: specific reason + what’s needed to continue
- 🔄 In Progress: next step explained
No such thing as “should be fine.”
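The three-status rule is easy to enforce mechanically if you ever report task results through code. A sketch (the rules themselves are prompt text, not code—this class is my illustration, not the author’s implementation):

```python
from dataclasses import dataclass

# The only endings the YES rules allow
VALID_STATUSES = ("done", "blocked", "in_progress")

@dataclass
class TaskReport:
    """A task report that must end in one of three states,
    each with its supporting detail (verification output,
    blocking reason, or next step)."""
    status: str
    detail: str

    def __post_init__(self):
        if self.status not in VALID_STATUSES:
            # "should be fine" is not a status
            raise ValueError(f"no such status: {self.status!r}")
        if not self.detail:
            raise ValueError("a bare status is not a report")
```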
Installing YES Discipline Engine
This rule set is plain text and can be added directly to any agent’s system prompt.
Claude Code / Claude API:
Add the five rules as a block in CLAUDE.md or in the system prompt.
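The original rules text isn’t reproduced here, but based on the five rules above, such a block could look like this (my paraphrase, not the author’s exact wording):

```markdown
## Discipline Rules

1. Never guess, always verify: any "might be X" must become
   "I ran Y and got result Z" before you report it.
2. Never deflect: tasks within your scope (allowed files,
   assigned tasks) are yours to solve. Ask for help only when
   out of scope, and state exactly what you need.
3. Never claim without evidence: "done" means done *with*
   verification output attached.
4. Never repeat a failed method: if an approach failed, change
   something before trying again.
5. Every task ends in exactly one status: ✅ Done (verification
   output), ❌ Blocked (reason + what is needed), or
   🔄 In Progress (next step stated). "Should be fine" is not
   a status.
```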
OpenClaw:
Add rules to the <anti-slack> block in SOUL.md, or load as a standalone skill (skills/yes-en/SKILL.md).
What Actually Changes?
After installation, the most obvious change isn’t the agent becoming “smarter”—it’s that you no longer need to be the middleman.
Previous flow:
- You ask a question
- Agent gives suggestions
- You verify manually
- You come back and tell the agent the result
- Agent gives next suggestion
- Repeat
With YES Engine:
- You give a task
- Agent runs through to the end, with full output
- You decide next step based on results
The difference is steps 3-5 vanish from your todo list.
One Note
YES Discipline Engine makes agents more proactive—they’ll run commands, read files, modify code. This means you need clear boundaries:
- `allowed_files`: which files the agent can modify
- Which operations need confirmation (deployments, DB changes, external publishing)
“Proactive” without boundaries becomes a problem. YES Engine assumes the agent has a clear scope—autonomous within scope, asking outside of it.
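One way such boundaries could be written down—the file name, keys, and values here are illustrative, not a standard format:

```yaml
# Hypothetical agent boundary config -- adapt to your
# agent's actual settings mechanism
allowed_files:
  - "src/**"
  - "tests/**"
confirm_before:
  - deploy
  - database_migration
  - external_publish
```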
If your agent still says “you should manually verify,” try adding these 5 rules to its system prompt and see if the next task goes differently.
Further Reading
- AI Self-Review Pipeline: How We Make Agents Review Their Own Code Before Sending PRs — A more complete agent quality control pipeline design
- Building an AI Multi-Agent Team from Zero: Our Real Experience and Pitfalls — Real problems encountered when building agent teams
- AI Agent vs Traditional Trading Bot: Differences and How to Choose — Another perspective on the essential difference in agent autonomy