Summary
The Self-Critique pattern enables LLMs to evaluate their own outputs against explicit criteria and iteratively improve them. A generate-critique-refine cycle lets an agent identify weaknesses, understand its errors, and produce higher-quality results. This reflection capability is essential for tasks that demand accuracy and reliability.
How it works
- Generate: Produce initial output
- Critique: Evaluate output against explicit criteria
- Identify issues: Flag specific weaknesses or errors
- Refine: Generate improved version addressing critiques
- Repeat: Continue until the quality threshold is met or an iteration cap is reached
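The cycle above can be sketched as a simple loop. This is a minimal sketch, not a definitive implementation: `call_llm` is a hypothetical model call, stubbed here with deterministic canned responses so the control flow can be followed end to end.

```python
def call_llm(prompt: str) -> str:
    # Stub standing in for a real model call (hypothetical).
    # Returns canned responses so the loop below is deterministic.
    if "Critique" in prompt:
        # Pretend the critic approves only once the draft has been refined.
        return "PASS" if "refined" in prompt else "Issue: missing example."
    if "Refine" in prompt:
        # Pretend refinement appends a marker to the draft.
        return prompt.split("Draft:")[-1].strip() + " (refined)"
    return "Initial draft answer"

def self_critique(task: str, max_iters: int = 3) -> str:
    draft = call_llm(f"Answer the task: {task}")  # Generate
    for _ in range(max_iters):
        critique = call_llm(
            "Critique this draft against accuracy, completeness, "
            f"and coherence. Draft: {draft}"
        )  # Critique + identify issues
        if critique.strip() == "PASS":  # quality threshold met
            break
        draft = call_llm(
            f"Refine the draft to address: {critique}\nDraft: {draft}"
        )  # Refine, then repeat
    return draft

print(self_critique("explain recursion"))  # Initial draft answer (refined)
```

The `max_iters` cap matters in practice: without it, a critic that never returns a passing verdict would loop indefinitely.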
Evaluation criteria
- Accuracy: Are facts correct? Is reasoning valid?
- Completeness: Did I address all requirements?
- Coherence: Is the response well-organized?
- Safety: Are there harmful or biased elements?
- Usefulness: Is the response actionable?
When to use
- Tasks requiring high accuracy (code, math, facts)
- Content generation with strict quality standards
- Scenarios where self-correction adds value
- Applications where multiple iterations are acceptable