Prompt Chaining

Workflow

LLM calls are sequenced with conditional branching to refine outputs.

Summary

This workflow demonstrates a prompt chaining pattern where LLM calls are sequenced with conditional branching. The initial LLM call feeds into a gate that determines which subsequent path to take, allowing for dynamic workflow adaptation based on intermediate results.

How it works

  1. Initial Generation: First LLM call produces an initial output
  2. Gate Evaluation: A model or logic layer evaluates quality/conditions
  3. Conditional Branching: Decision point routes to different continuation paths
  4. Iterative Refinement: Selected branch continues with additional LLM calls
  5. Final Output: Chained calls converge to produce the result
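The steps above can be sketched as a minimal TypeScript skeleton. `callLLM` is a hypothetical stand-in for any model client (you would wire in a real SDK call in practice); the prompts are illustrative, not prescribed:

```typescript
// Hypothetical LLM interface: takes a prompt, returns the model's text.
type LLM = (prompt: string) => Promise<string>;

async function promptChain(input: string, callLLM: LLM): Promise<string> {
  // 1. Initial Generation: first call produces a draft
  const draft = await callLLM(`Draft a response to: ${input}`);

  // 2. Gate Evaluation: a second call (or plain logic) grades the draft
  const verdict = await callLLM(
    `Answer PASS or FAIL: is this draft complete?\n${draft}`,
  );

  // 3. Conditional Branching: route on the gate's verdict
  if (verdict.includes("PASS")) {
    // 4a. Refinement branch: polish the accepted draft
    return callLLM(`Polish this draft: ${draft}`);
  }
  // 4b. Rejection branch: regenerate with the failed draft as context
  return callLLM(
    `Rewrite, fixing the gaps. Request: ${input}\nRejected draft: ${draft}`,
  );
}
```

Because the chain is expressed over an injected `callLLM`, the control flow can be exercised with a stubbed function in tests before any real model is attached.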

When to use

  • Simple linear steps: a straight chain with no gate
  • Quality-sensitive outputs: a gate with a rejection/retry path
  • Multiple valid approaches: conditional branching between paths
  • Variable-depth tasks: adaptive chain length
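For the quality-sensitive case, the gate does not have to be a model call; it can be deterministic logic. A sketch, where the character threshold is an arbitrary assumption for illustration:

```typescript
type GateResult = { pass: boolean; reason: string };

// Deterministic quality gate: reject drafts that are empty or too short
// to be useful, sending them down the rejection path for regeneration.
function lengthGate(draft: string, minChars = 50): GateResult {
  const trimmed = draft.trim();
  if (trimmed.length === 0) return { pass: false, reason: "empty draft" };
  if (trimmed.length < minChars) return { pass: false, reason: "draft too short" };
  return { pass: true, reason: "ok" };
}
```

Cheap programmatic gates like this are worth trying first: they add no latency or cost, and a model-based grader can be reserved for criteria that code cannot check.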

Prompt Chaining Implementation

// Prompt Chaining recipe using OpenAI
// Install: bun add openai

import OpenAI from "openai";

async function main() {
  const input = "Add your prompt here.";
  const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  const system = "You are a senior AI engineer and technical writer. Apply the agent recipe to structure the reasoning and produce a useful result. Recipe: Prompt Chaining. Description: LLM calls are sequenced with conditional branching to refine outputs. Focus: Workflow. Provide actionable, implementation-ready guidance.";
  const user = `Request: ${input}`;

  // Step 1 — initial generation
  const draftResponse = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: system },
      { role: "user", content: user },
    ],
  });
  const draft = draftResponse.choices[0]?.message?.content?.trim() ?? "";

  // Step 2 — gate: a simple programmatic check; short drafts are routed
  // to a refinement call, otherwise the draft is emitted as-is
  if (draft.length < 200) {
    const refined = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: system },
        { role: "user", content: user },
        { role: "assistant", content: draft },
        { role: "user", content: "Expand this draft into a fuller answer." },
      ],
    });
    console.log(refined.choices[0]?.message?.content?.trim() ?? "");
    return;
  }

  console.log(draft);
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});