Reflection and Self-Critique
Reflection agents evaluate their own outputs and iteratively improve them.
Task
Implement a ReflectiveAgent that:
- Generates an initial response.
- Critiques it against a list of criteria (e.g., ['concise', 'accurate', 'professional']).
- Iterates until the criteria are satisfied or the maximum number of iterations is reached.
- Logs all reflections.
Constraints
- Criteria are strings; treat them as keyword checks on the response.
- Stop early if critique returns no issues.
- Max iterations default: 3.
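Given these constraints, the critique step reduces to a keyword check. A minimal sketch of that idea (the helper name keyword_critique is ours, not part of the starter code):

```python
from typing import List

def keyword_critique(response: str, criteria: List[str]) -> List[str]:
    # A criterion passes if its keyword appears (case-insensitively) in the response.
    # Returns the list of unmet criteria; an empty list means no issues.
    return [c for c in criteria if c.lower() not in response.lower()]

# keyword_critique('A concise answer', ['concise', 'accurate']) -> ['accurate']
```

An empty return value is the early-stop signal mentioned in the constraints.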
Examples
Example 1:
Input:
agent = ReflectiveAgent()
agent.reflect_and_improve('Explain AI', ['concise'])
Output:
A refined response meeting the 'concise' criterion.
Explanation: The agent iterates until the critique passes or the maximum number of iterations is hit.
Starter Code
from typing import List

class ReflectiveAgent:
    def __init__(self):
        self.reflections: List[str] = []

    def generate_response(self, prompt: str) -> str:
        # Simulated LLM call
        return f'Response to: {prompt}'

    def critique(self, response: str, criteria: List[str]) -> str:
        # TODO: Check response against criteria, return critique
        pass

    def reflect_and_improve(self, prompt: str, criteria: List[str], max_iterations: int = 3) -> str:
        # TODO: Loop generate → critique → improve until criteria met
        pass
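One possible way to fill in the TODOs, treating each criterion as a case-insensitive keyword per the constraints. The improve_response helper and the 'Missing: ' critique format are our assumptions, not part of the exercise; a real agent would re-prompt an LLM instead of patching the string:

```python
from typing import List

class ReflectiveAgent:
    def __init__(self):
        self.reflections: List[str] = []

    def generate_response(self, prompt: str) -> str:
        # Simulated LLM call
        return f'Response to: {prompt}'

    def critique(self, response: str, criteria: List[str]) -> str:
        # Keyword check: a criterion passes if it appears in the response.
        missing = [c for c in criteria if c.lower() not in response.lower()]
        return 'Missing: ' + ', '.join(missing) if missing else ''

    def improve_response(self, response: str, critique: str) -> str:
        # Hypothetical improvement step: append the missing keywords.
        # A real agent would feed the critique back into the LLM.
        missing = critique.removeprefix('Missing: ')
        return f'{response} [{missing}]'

    def reflect_and_improve(self, prompt: str, criteria: List[str], max_iterations: int = 3) -> str:
        response = self.generate_response(prompt)
        for _ in range(max_iterations):
            critique = self.critique(response, criteria)
            self.reflections.append(critique or 'No issues')  # log every reflection
            if not critique:  # stop early when the critique is clean
                break
            response = self.improve_response(response, critique)
        return response
```

Example usage: ReflectiveAgent().reflect_and_improve('Explain AI', ['concise']) refines the response until the 'concise' keyword check passes or three iterations elapse, recording each critique in self.reflections along the way.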