Sequential Prompt Chain

Difficulty: Easy | Category: Agents

Implement a sequential prompt chaining system where:

  1. Each prompt template can use {input} and {context} placeholders
  2. Output of step N becomes input for step N+1
  3. Context accumulates across steps (store each output in context by step index)
  4. Use mock_llm() to simulate LLM calls

Template Format:

  • "Summarize: {input}"
  • "Review: {input} | History: {context}"

Context Structure: {'step_0': output0, 'step_1': output1, ...}
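One way to fill these placeholders is Python's str.format, passing the current input and the context dict as keyword arguments (the template strings here are from the spec above; the sample values are illustrative):

```python
# Fill {input} and {context} placeholders with str.format.
prompt_template = "Review: {input} | History: {context}"
context = {"step_0": "draft summary"}  # illustrative accumulated context

filled = prompt_template.format(input="new text", context=context)
print(filled)
# Review: new text | History: {'step_0': 'draft summary'}

# Templates that use only {input} still format cleanly, because
# str.format ignores unused keyword arguments:
simple = "Summarize: {input}".format(input="new text", context=context)
```

Note that str.format silently ignores extra keyword arguments, so every template can be formatted with both input and context regardless of which placeholders it actually uses.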

Examples

Example 1:
Input: prompt_chain('Hello world', ['Echo: {input}', 'Upper: {input}'])
Output: {'final_output': '[Processed: Upper: [Processed: Echo: Hello...]', 'intermediate_outputs': ['[Processed: Echo: Hello world...]'], 'final_context': {'step_0': '[Processed: Echo: Hello world...]', 'step_1': '[Processed: Upper: [Processed: Echo: Hello...]'}}
Explanation: The chain runs the two prompts in sequence, feeding each output into the next prompt's {input} and accumulating every output in the context.
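The truncated final output follows from mock_llm keeping only the first 30 characters of each prompt; the example can be traced by hand:

```python
# The mock LLM from the starter code: echoes the first 30 chars of the prompt.
def mock_llm(prompt):
    return f"[Processed: {prompt[:30]}...]"

# Step 0: "Echo: Hello world" is only 17 characters, so nothing is cut.
step0 = mock_llm("Echo: Hello world")
print(step0)   # [Processed: Echo: Hello world...]

# Step 1: the second prompt embeds step 0's full output, so the 30-char
# cutoff lands mid-way through it.
step1 = mock_llm(f"Upper: {step0}")
print(step1)   # [Processed: Upper: [Processed: Echo: Hello...]
```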

Starter Code

def prompt_chain(initial_input, prompts, context=None):
    """
    Execute a chain of prompts sequentially, passing output of each
    as input to the next (with optional context accumulation).
    
    Args:
        initial_input: Starting input string
        prompts: List of prompt templates with {input} and {context} placeholders
        context: Optional dict of accumulated context
    
    Returns:
        dict with 'final_output', 'intermediate_outputs', 'final_context'
    """
    # Simulate the LLM call with a stub (in real use, call an actual LLM)
    def mock_llm(prompt):
        return f"[Processed: {prompt[:30]}...]"
    
    # Your implementation here
    pass
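A minimal sketch of one way to complete the starter code. It assumes, following Example 1, that intermediate_outputs holds every step's output except the last (the final output is returned separately), while the context stores every step's output under its step index:

```python
def prompt_chain(initial_input, prompts, context=None):
    """Run prompts sequentially; each output feeds the next prompt's {input}."""
    def mock_llm(prompt):
        return f"[Processed: {prompt[:30]}...]"

    context = dict(context) if context else {}  # don't mutate the caller's dict
    intermediate_outputs = []
    current_input = initial_input

    for i, template in enumerate(prompts):
        # str.format ignores unused kwargs, so every template gets both values.
        prompt = template.format(input=current_input, context=context)
        output = mock_llm(prompt)
        context[f"step_{i}"] = output           # accumulate by step index
        if i < len(prompts) - 1:                # final output reported separately
            intermediate_outputs.append(output)
        current_input = output                   # output of step N -> input of N+1

    return {
        "final_output": current_input,
        "intermediate_outputs": intermediate_outputs,
        "final_context": context,
    }

result = prompt_chain("Hello world", ["Echo: {input}", "Upper: {input}"])
print(result["final_output"])
# [Processed: Upper: [Processed: Echo: Hello...]
```

If instead intermediate_outputs is meant to include every step's output, drop the `if i < len(prompts) - 1` guard and append unconditionally; the rest of the loop is unchanged.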
The AI Interview - Master AI/ML Interviews