Iteration & Refinement
Learn the systematic process for evaluating and refining MOTIVE prompts through structured iteration cycles.
Generate
Submit your MOTIVE-structured prompt and receive the initial output from the AI system.
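A minimal sketch of one way to hold a MOTIVE-structured prompt before submission: each component is stored separately so later cycles can revise a single piece. The component texts, the dictionary layout, and the `assemble_prompt` helper are illustrative assumptions, not part of the framework.

```python
# Illustrative only: component texts and helper names are assumptions.
motive_prompt = {
    "M": "This summary briefs executives before Friday's review.",        # why the task matters
    "O": "A one-page summary with three labeled sections.",               # expected output shape
    "T": "Extract decisions, risks, and open questions, then condense.",  # chosen methodology
    "I": "Read the transcript, group points by section, then summarize.", # steps to follow
    "V": "Transcript: {transcript}. Audience: {audience}.",               # variables to fill in
    "E": "Score 1-5 on completeness, concision, and section structure.",  # evaluation criteria
}

def assemble_prompt(components: dict[str, str]) -> str:
    """Join the components in M-O-T-I-V-E order into a single prompt string."""
    return "\n\n".join(components[key] for key in "MOTIVE")

print(assemble_prompt(motive_prompt))
```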
Evaluate
Score the output against your Evaluation criteria (E component). Use the defined scoring scale and thresholds.
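To make the Evaluate step mechanical, each criterion can be scored on the defined scale and compared against its threshold. The criterion names, the 1-5 scale, and the passing threshold of 4 below are illustrative assumptions; substitute whatever your E component defines.

```python
THRESHOLD = 4  # assumption: a criterion passes at 4 or above on a 1-5 scale

def find_failed(scores: dict[str, int], threshold: int = THRESHOLD) -> dict[str, int]:
    """Return only the criteria whose scores fall below the threshold."""
    return {name: score for name, score in scores.items() if score < threshold}

scores = {"completeness": 5, "concision": 3, "section_structure": 4}
print(find_failed(scores))  # {'concision': 3} -> this criterion needs diagnosis
```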
Diagnose
If any criterion falls below its threshold, identify which MOTIVE component contributed to the gap and map the weakness back to M, O, T, I, or V.
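Diagnosis is easiest when each evaluation criterion is mapped in advance to the component most likely responsible for it. The mapping below is a hypothetical example for the criteria used above; build your own when you write the E component.

```python
# Hypothetical criterion-to-component map for the example criteria above.
CRITERION_TO_COMPONENT = {
    "completeness": "I",       # missing content often traces to the steps given
    "concision": "O",          # verbosity often traces to the output specification
    "section_structure": "O",
}

def diagnose(failed_criteria: dict[str, int]) -> set[str]:
    """Map each below-threshold criterion back to a MOTIVE component."""
    return {CRITERION_TO_COMPONENT[name] for name in failed_criteria}

print(diagnose({"concision": 3}))  # {'O'}
```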
Revise
Refine the specific component identified in the diagnosis. Do not rewrite the entire prompt; target the root cause.
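Storing the prompt per component, as in the sketch under Generate, makes targeted revision easy to enforce: replace one entry and leave the rest untouched. The helper below is a sketch under that assumption, not a prescribed API.

```python
def revise(components: dict[str, str], target: str, new_text: str) -> dict[str, str]:
    """Return a copy of the prompt with only the diagnosed component replaced."""
    revised = dict(components)
    revised[target] = new_text
    return revised

prompt = {"M": "...", "O": "One page, any structure.", "T": "...", "I": "...", "V": "...", "E": "..."}
prompt = revise(prompt, "O", "One page with three labeled sections.")  # only O changes
```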
Regenerate
Submit the revised prompt and evaluate again. Continue until all criteria meet the threshold or the maximum number of revision cycles is reached.
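Putting the five steps together, here is a hedged sketch of the full loop. `generate_fn`, `score_fn`, and `revise_fn` stand in for whatever model call, scoring procedure, and revision step you use; the threshold of 4 and the cap of 3 cycles mirror the best practices below and are assumptions, not fixed values.

```python
from typing import Callable

def iterate_prompt(
    components: dict[str, str],                  # prompt stored per MOTIVE component
    generate_fn: Callable[[str], str],           # stand-in for your model call
    score_fn: Callable[[str], dict[str, int]],   # stand-in for your scoring procedure
    criterion_to_component: dict[str, str],      # diagnosis map: E criteria -> M/O/T/I/V
    revise_fn: Callable[[dict[str, str], set[str]], dict[str, str]],  # targeted revision
    threshold: int = 4,
    max_cycles: int = 3,
) -> str:
    """Run Generate -> Evaluate -> Diagnose -> Revise -> Regenerate until every
    criterion meets the threshold or the cycle cap is reached."""
    output = ""
    for _ in range(max_cycles):
        prompt = "\n\n".join(components[key] for key in "MOTIVE")
        output = generate_fn(prompt)                                        # Generate / Regenerate
        failed = {c for c, s in score_fn(output).items() if s < threshold}  # Evaluate
        if not failed:
            return output                                                   # all criteria pass
        weak = {criterion_to_component[c] for c in failed}                  # Diagnose
        components = revise_fn(components, weak)                            # Revise (targeted)
    return output                                                           # best effort after the cap
```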
Diagnosis Mapping
When output quality drops, trace the issue to the responsible component. Each question below probes one MOTIVE component; if you cannot answer it confidently, that component is the likely root cause.
- M: Can a colleague understand why this task matters from the prompt alone?
- O: Could you sketch the structure of the expected output before generation?
- T: Would a domain expert recognize the chosen methodology as appropriate?
- I: Could someone else follow these steps and reach a similar output?
- V: If you removed the variables, would the output still meet your needs?
- E: Does each evaluation criterion trace back to a specific component?
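One small convenience, assuming the per-component storage sketched earlier: keep the diagnostic questions in code so that any flagged components can be printed alongside their questions during a review.

```python
# The diagnostic questions from the mapping above, keyed by component letter.
DIAGNOSTIC_QUESTIONS = {
    "M": "Can a colleague understand why this task matters from the prompt alone?",
    "O": "Could you sketch the structure of the expected output before generation?",
    "T": "Would a domain expert recognize the chosen methodology as appropriate?",
    "I": "Could someone else follow these steps and reach a similar output?",
    "V": "If you removed the variables, would the output still meet your needs?",
    "E": "Does each evaluation criterion trace back to a specific component?",
}

def checklist(flagged: set[str]) -> None:
    """Print the diagnostic question for each flagged component, in MOTIVE order."""
    for key in "MOTIVE":
        if key in flagged:
            print(f"{key}: {DIAGNOSTIC_QUESTIONS[key]}")

checklist({"O", "I"})  # prints the O and I questions
```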
Best Practices
1. Set a maximum number of revision cycles (typically 2-3) to prevent infinite loops.
2. Revise only the diagnosed component, not the entire prompt. Targeted refinement preserves what already works.
3. Document each iteration cycle. Tracking changes builds institutional knowledge about effective prompt patterns.
4. If multiple criteria fail, address the component closest to M (Motivation) first; upstream fixes often resolve downstream issues. A small sketch of this prioritization, and of a simple iteration record, follows this list.
5. Share successful iteration patterns with your team to build collective prompt engineering fluency.
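As referenced in practice 4, here is a short sketch of the prioritization rule and of a minimal iteration record. The record fields and the example values are illustrative assumptions, not a required schema.

```python
from dataclasses import dataclass, field

MOTIVE_ORDER = "MOTIVE"  # upstream-to-downstream component order

def prioritize(flagged: set[str]) -> list[str]:
    """Order flagged components so the one closest to M is revised first."""
    return sorted(flagged, key=MOTIVE_ORDER.index)

@dataclass
class IterationRecord:
    """One documented cycle: what changed, why, and the resulting scores."""
    cycle: int
    revised_component: str
    rationale: str
    scores: dict[str, int] = field(default_factory=dict)

log: list[IterationRecord] = []
log.append(IterationRecord(cycle=1, revised_component="O",
                           rationale="Output spec lacked section labels",
                           scores={"concision": 3}))
print(prioritize({"V", "O"}))  # ['O', 'V'] -> O is closer to M, so revise it first
```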