Most agent sessions fail because we mix discovery and implementation in one pass. The model keeps changing direction mid-run.
I now split work into three steps: research, plan, implement. It looks slower. It ships faster.
Step 1: research only
Read files, map constraints, find existing patterns. No edits yet. This keeps the model from guessing at the architecture.
I ask for references with file paths and exact commands. Evidence first.
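A research request can be this concrete. The format below is my own illustration, not a fixed template, and the file path in it is hypothetical:

```text
Task: map how retries are handled. Do not edit any files.
Return:
- relevant files, with paths (e.g. src/http/retry.ts)
- the existing pattern each file follows
- exact commands to reproduce current behavior
- constraints you found: configs, tests, callers
```

Anything the model cannot back with a path or a command gets re-checked before it enters the plan.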
Step 2: plan in writing
Turn findings into a short execution plan with risk notes and test impact. This becomes durable context for the next run.
That document is part of context engineering. It reduces drift when sessions reset.
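A plan that survives a session reset can be very short. The structure and every name below are my own sketch, not a required format:

```text
Goal: return 429 with Retry-After instead of dropping requests
Steps:
  1. add limiter check in the handler (follow the existing auth middleware pattern)
  2. wire a config flag, default off
  3. update handler tests; add one for the 429 path
Risks: shared state under concurrency; flag name collision
Test impact: handler suite only
```

The point is that a fresh session can pick this up cold, with no chat history attached.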
Step 3: implement with checks
Now edit code, run tests, fix failures. Keep the loop tight: change, verify, commit.
If quality drops, clear context and continue from the written plan instead of arguing in chat.
Why it works
Separating the thinking modes keeps the prompt free of contradictions. The model never has to guess whether it should explore or execute.
You get less thrashing, fewer reverts, and clearer commit history.