Summary
Few-Shot Prompting provides the LLM with a few examples of the desired input-output behavior before asking it to solve a new problem. This technique leverages the model's ability to recognize patterns and adapt to specific tasks without explicit instructions. It's particularly effective for tasks with consistent formats or when you want the model to follow a specific style or approach.
Implementation
- Select diverse, representative examples that cover the range of inputs and outputs you expect.
- Use a consistent format for all examples, with clear separation between input and output.
- Order matters: place examples in increasing order of complexity, or arrange them to highlight important patterns.
- Include 3-5 examples for most tasks; more complex tasks may benefit from additional examples.
- Match the format of your examples exactly when presenting the new problem to solve.
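The formatting guidance above can be sketched as a small prompt builder. The sentiment-classification examples and the "Input:/Output:" labels below are illustrative assumptions, not a required convention:

```python
# Sketch of a few-shot prompt builder. The labels and examples are
# hypothetical; the key point is one consistent template throughout.

def build_few_shot_prompt(examples, query, input_label="Input", output_label="Output"):
    """Format all examples and the new query with the same template."""
    blocks = [
        f"{input_label}: {ex['input']}\n{output_label}: {ex['output']}"
        for ex in examples
    ]
    # The new problem uses the exact same format, with the output left
    # blank for the model to complete.
    blocks.append(f"{input_label}: {query}\n{output_label}:")
    return "\n\n".join(blocks)

examples = [
    {"input": "The service was slow and the food was cold.", "output": "negative"},
    {"input": "Absolutely loved the atmosphere!", "output": "positive"},
    {"input": "It was fine, nothing special.", "output": "neutral"},
]

prompt = build_few_shot_prompt(examples, "Great coffee, rude staff.")
print(prompt)
```

Because the final query mirrors the example format exactly, the model's most likely continuation is a single label in the same style as the example outputs.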
Example selection strategies
- Random: Simple baseline selection
- Semantic: Examples similar to query
- Diversity: Cover different categories
- Difficulty: Start simple, increase complexity
- Automated: Use embedding similarity for retrieval
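The semantic and automated strategies can be sketched as follows. A production system would use a real embedding model; here a toy bag-of-words vector and cosine similarity stand in so the example stays self-contained, and the support-ticket pool is invented for illustration:

```python
# Sketch of embedding-similarity example selection. embed() is a toy
# stand-in for a real embedding model; the pool data is hypothetical.
import math
from collections import Counter

def embed(text):
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_examples(pool, query, k=2):
    """Return the k pool examples most similar to the query."""
    q = embed(query)
    return sorted(pool, key=lambda ex: cosine(embed(ex["input"]), q), reverse=True)[:k]

pool = [
    {"input": "refund my order", "output": "billing"},
    {"input": "app crashes on startup", "output": "bug"},
    {"input": "how do I reset my password", "output": "account"},
]

chosen = select_examples(pool, "my order never got a refund", k=2)
print([ex["output"] for ex in chosen])  # the billing example ranks first
```

Swapping `embed()` for a real embedding model turns this into the retrieval-based selection described above; the ranking logic stays the same.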
Best for
- Classification tasks with clear categories
- Translation between formats or languages
- Summarization with specific style requirements
- Style transfer tasks