Agent Self-Reflection and Self-Critique

Easy
Agents

Reflection and Self-Critique

Reflection agents evaluate their own outputs and iteratively improve them.

Task

Implement a ReflectiveAgent that:

  1. Generates an initial response.
  2. Critiques it against a list of criteria (e.g., ['concise', 'accurate', 'professional']).
  3. Iterates until criteria are satisfied or max iterations reached.
  4. Logs all reflections.

Constraints

  • Criteria are strings; treat them as keyword checks on the response.
  • Stop early if critique returns no issues.
  • Max iterations default: 3.

Examples

Example 1:
Input:
agent = ReflectiveAgent()
agent.reflect_and_improve('Explain AI', ['concise'])
Output: A refined response meeting the 'concise' criterion.
Explanation: The agent iterates until the critique passes or the max-iteration limit is reached.

Starter Code

from typing import List

class ReflectiveAgent:
    def __init__(self):
        self.reflections: List[str] = []

    def generate_response(self, prompt: str) -> str:
        # Simulated LLM call
        return f'Response to: {prompt}'

    def critique(self, response: str, criteria: List[str]) -> str:
        # TODO: Check response against criteria, return critique
        pass

    def reflect_and_improve(self, prompt: str, criteria: List[str], max_iterations: int = 3) -> str:
        # TODO: Loop generate → critique → improve until criteria met
        pass
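One possible reference solution is sketched below. It treats each criterion as a simple keyword check (a criterion passes if its word appears in the response) and folds the critique text back into the prompt before regenerating. Both choices are assumptions: the problem only says to treat criteria as keyword checks, and the exact prompt-revision wording is up to you.

```python
from typing import List

class ReflectiveAgent:
    def __init__(self):
        self.reflections: List[str] = []

    def generate_response(self, prompt: str) -> str:
        # Simulated LLM call
        return f'Response to: {prompt}'

    def critique(self, response: str, criteria: List[str]) -> str:
        # Keyword check: a criterion passes if its word appears in the response.
        unmet = [c for c in criteria if c.lower() not in response.lower()]
        if not unmet:
            return ''  # empty critique signals all criteria are satisfied
        return f"Missing criteria: {', '.join(unmet)}"

    def reflect_and_improve(self, prompt: str, criteria: List[str],
                            max_iterations: int = 3) -> str:
        response = self.generate_response(prompt)
        for i in range(max_iterations):
            issues = self.critique(response, criteria)
            # Log every reflection, whether or not issues were found.
            self.reflections.append(f'Iteration {i + 1}: {issues or "no issues"}')
            if not issues:
                break  # stop early: critique returned no issues
            # Fold the critique back into the prompt and regenerate.
            response = self.generate_response(f'{prompt} (revise to be: {issues})')
        return response
```

With the simulated `generate_response`, embedding the critique in the revised prompt makes the missing keywords appear in the next response, so the loop converges on the second iteration; a real LLM call would instead use the critique as revision instructions.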
The AI Interview - Master AI/ML Interviews