Prompt Chaining
Prompt chaining connects multiple LLM calls where the output of one becomes the input to the next.
Task
Implement a PromptChain class that:
- Allows adding callable steps (functions or lambdas).
- Executes steps sequentially, passing output to the next step.
- Returns the final output.
- Captures intermediate outputs for debugging.
Constraints
- Steps must be callable.
- Empty chain should return the input unchanged.
- Each step's output must be a string.
Examples
Example 1:
Input:
chain = PromptChain()
chain.add_step(lambda x: x.upper())
chain.add_step(lambda x: f'Result: {x}')
chain.run('hello')
Output:
'Result: HELLO'
Explanation: Step 1 uppercases, step 2 prepends 'Result: '.
Starter Code
from typing import List, Callable

class PromptChain:
    def __init__(self):
        self.steps: List[Callable[[str], str]] = []

    def add_step(self, fn: Callable[[str], str]) -> 'PromptChain':
        # TODO: Add a transformation step
        pass

    def run(self, initial_input: str) -> str:
        # TODO: Execute chain sequentially
        pass
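For reference, one possible completion of the starter code is sketched below. It is not the only valid solution; the `history` attribute name is an assumption (the task only asks that intermediate outputs be captured somewhere), and `add_step` returns `self` so calls can be chained fluently.

```python
from typing import Callable, List

class PromptChain:
    """Chains string-transforming steps; each step's output feeds the next."""

    def __init__(self):
        self.steps: List[Callable[[str], str]] = []
        self.history: List[str] = []  # intermediate outputs, for debugging
                                      # ('history' is an assumed name)

    def add_step(self, fn: Callable[[str], str]) -> 'PromptChain':
        # Enforce the "steps must be callable" constraint up front.
        if not callable(fn):
            raise TypeError('step must be callable')
        self.steps.append(fn)
        return self  # enables fluent chaining: chain.add_step(f).add_step(g)

    def run(self, initial_input: str) -> str:
        self.history = [initial_input]
        output = initial_input
        for step in self.steps:
            output = step(output)
            # Enforce the "each step's output must be a string" constraint.
            if not isinstance(output, str):
                raise TypeError('each step must return a string')
            self.history.append(output)
        # With no steps, the loop never runs and the input is returned unchanged.
        return output
```

Running Example 1 against this sketch produces 'Result: HELLO', and `history` holds ['hello', 'HELLO', 'Result: HELLO'] afterwards.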