Implement a sequential prompt chaining system where:
- Each prompt template can use {input} and {context} placeholders
- Output of step N becomes input for step N+1
- Context accumulates across steps (store each output in context by step index)
- Use mock_llm() to simulate LLM calls
Template Format:
"Summarize: {input}"
"Review: {input} | History: {context}"
Context Structure:
{'step_0': output0, 'step_1': output1, ...}
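Before wiring up the full chain, it helps to see how a single template gets filled. A minimal sketch, assuming templates are filled with `str.format` (the template strings and context shape are from the spec above; the concrete values are illustrative):

```python
# Filling one template with the current input and the accumulated context.
# str.format ignores unused keyword arguments, so templates that only use
# {input} work with the exact same call.
template = "Review: {input} | History: {context}"
context = {"step_0": "draft summary"}
prompt = template.format(input="new text", context=context)
# prompt == "Review: new text | History: {'step_0': 'draft summary'}"
```

Note that `{context}` is rendered via the dict's `repr`; a real system might format the history more carefully, but this is enough for the exercise.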
Examples
Example 1:
Input:
prompt_chain('Hello world', ['Echo: {input}', 'Upper: {input}'])
Output:
{'final_output': '[Processed: Upper: [Processed: Ec...]', 'intermediate_outputs': ['[Processed: Echo: Hello world...]'], 'final_context': {'step_0': '[Processed: Echo: Hello world...]'}}
Explanation: Chain processes through two prompts, accumulating context
Starter Code
def prompt_chain(initial_input, prompts, context=None):
    """
    Execute a chain of prompts sequentially, passing the output of each
    step as input to the next (with optional context accumulation).

    Args:
        initial_input: Starting input string
        prompts: List of prompt templates with {input} and {context} placeholders
        context: Optional dict of accumulated context

    Returns:
        dict with 'final_output', 'intermediate_outputs', 'final_context'
    """
    # Simulate an LLM call (in real use, call an actual LLM)
    def mock_llm(prompt):
        return f"[Processed: {prompt[:30]}...]"

    # Your implementation here
    pass
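One way to complete the starter code is sketched below. It assumes every step's output is stored in the context under its step index (per the Context Structure section) and that 'intermediate_outputs' holds all outputs except the final one, matching Example 1; a solution that also lists the final output there would need a one-line change:

```python
def prompt_chain(initial_input, prompts, context=None):
    """Run prompts sequentially; each step's output feeds the next prompt."""

    # Simulate an LLM call (in real use, call an actual LLM)
    def mock_llm(prompt):
        return f"[Processed: {prompt[:30]}...]"

    # Copy the caller's context so we never mutate their dict
    context = dict(context) if context else {}
    current = initial_input
    intermediate = []

    for i, template in enumerate(prompts):
        # str.format ignores unused kwargs, so {input}-only templates are fine
        prompt = template.format(input=current, context=context)
        current = mock_llm(prompt)
        # Accumulate this step's output under its step index
        context[f"step_{i}"] = current
        # Record all but the final output as intermediate results
        if i < len(prompts) - 1:
            intermediate.append(current)

    return {
        "final_output": current,
        "intermediate_outputs": intermediate,
        "final_context": context,
    }
```

With Example 1's inputs, step 0 produces '[Processed: Echo: Hello world...]', which becomes the {input} of step 1; both outputs end up in the final context as 'step_0' and 'step_1'. (The output strings shown in Example 1 appear display-truncated; the mock itself cuts prompts at 30 characters.)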