Design an Agent with Full Conflict Resolution

Hard
Agents

Multi-Agent Conflict Resolution

When multiple agents produce conflicting proposals, the system needs a principled mechanism to resolve them.

Task

Implement a ConflictResolver that handles four conflict types:

  1. Factual: Agents disagree on facts → confidence-weighted majority vote.
  2. Resource: Agents compete for same resource → priority + timestamp ordering.
  3. Priority: Agents rank tasks differently → authority-weighted vote.
  4. Decision: Agents recommend different actions → Borda count or LLM judge.

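As a rough illustration of strategy 1, a confidence-weighted vote can be sketched as below. The function name and the (option, confidence) input shape are illustrative, not part of the starter code, and reporting the winner's *share* of total confidence is only one way to read the formula in the constraints; Example 1 instead reports the winning proposal's own confidence.

```python
from collections import defaultdict

def confidence_weighted_vote(votes):
    """Pick the option with the highest total supporting confidence.

    `votes` is a list of (option, confidence) pairs; the returned
    score is the winner's share of the total confidence mass.
    """
    totals = defaultdict(float)
    for option, confidence in votes:
        totals[option] += confidence
    # Sort by (-score, option) so equal scores break deterministically.
    winner, score = min(totals.items(), key=lambda kv: (-kv[1], kv[0]))
    return winner, score / sum(totals.values())

# Paris wins; its share of total confidence is 0.9 / 1.3.
print(confidence_weighted_vote([('Paris', 0.9), ('London', 0.4)]))
```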
Non-Functional Requirements

  • Resolution time < 500ms for non-LLM strategies.
  • All resolutions logged with reasoning.
  • Support for human-in-the-loop escalation when confidence < 0.6.
  • Borda count must be deterministic for reproducibility.
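The determinism requirement on Borda count can be met by breaking score ties on a stable key such as the option name. A minimal sketch, with names and the ranked-list input format assumed rather than taken from the starter code:

```python
def borda_count(rankings, options):
    """Score each option by its position across agents' ranked lists.

    `rankings` is a list of ordered option lists (best first).  With n
    options, a first-place rank earns n-1 points and last place earns 0.
    Tied leaders are sorted alphabetically so repeated runs agree.
    """
    scores = {opt: 0 for opt in options}
    n = len(options)
    for ranking in rankings:
        for position, opt in enumerate(ranking):
            scores[opt] += n - 1 - position
    best = max(scores.values())
    tied = sorted(opt for opt, s in scores.items() if s == best)
    # A second return value flags a tie, signalling an LLM-judge fallback.
    return tied[0], len(tied) > 1

rankings = [['deploy', 'wait', 'rollback'],
            ['deploy', 'rollback', 'wait']]
print(borda_count(rankings, ['deploy', 'wait', 'rollback']))
```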

Constraints

  • Confidence-weighted vote: sum(confidence × vote) / sum(confidence).
  • Resource conflict: earliest timestamp wins on tie.
  • LLM judge fallback only when Borda count produces a tie.
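The resource rule above (higher priority wins, earliest timestamp on ties) reduces to a single sort key. A hedged sketch, using plain dicts in place of AgentProposal and assuming a numeric 'priority' field inside each proposal:

```python
def allocate_resource(proposals):
    """Grant the contested resource to exactly one proposal.

    Higher `priority` wins; equal priorities fall back to the
    earlier timestamp, per the constraint above.
    """
    return min(
        proposals,
        key=lambda p: (-p['proposal'].get('priority', 0), p['timestamp']),
    )

contenders = [
    {'agent_id': 'a1', 'proposal': {'priority': 2}, 'timestamp': 10.0},
    {'agent_id': 'a2', 'proposal': {'priority': 2}, 'timestamp': 5.0},
    {'agent_id': 'a3', 'proposal': {'priority': 1}, 'timestamp': 1.0},
]
# a1 and a2 tie on priority; a2's earlier timestamp wins.
print(allocate_resource(contenders)['agent_id'])
```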

Examples

Example 1:
Input:
proposals = [
    AgentProposal('a1', {'fact': 'Paris'}, 0.9, 'cited sources'),
    AgentProposal('a2', {'fact': 'London'}, 0.4, 'memory'),
]
conflict = Conflict('c1', ConflictType.FACTUAL, proposals, {})
resolver.resolve(conflict)
Output: {'resolution': 'Paris', 'method': 'confidence_weighted', 'confidence': 0.9}
Explanation: Paris wins with 0.9 confidence vs London's 0.4.

Starter Code

from typing import List, Dict, Any, Optional, Tuple, Callable
from dataclasses import dataclass, field
from enum import Enum

class ConflictType(Enum):
    FACTUAL = 'factual'         # Agents disagree on facts
    RESOURCE = 'resource'       # Multiple agents want same resource
    PRIORITY = 'priority'       # Task priority disagreement
    DECISION = 'decision'       # Different recommended actions

@dataclass
class AgentProposal:
    agent_id: str
    proposal: Dict
    confidence: float
    reasoning: str
    evidence: List[str] = field(default_factory=list)
    timestamp: float = 0.0

@dataclass
class Conflict:
    conflict_id: str
    conflict_type: ConflictType
    proposals: List[AgentProposal]
    context: Dict
    resolution: Optional[Dict] = None

class ConflictResolver:
    def __init__(self, llm_judge_fn: "Optional[Callable]" = None):
        # Optional LLM tie-breaker, invoked only when Borda count ties.
        self.llm_judge_fn = llm_judge_fn
        self.resolution_history: List[Dict] = []
        self.strategies = {
            ConflictType.FACTUAL: self._resolve_factual,
            ConflictType.RESOURCE: self._resolve_resource,
            ConflictType.PRIORITY: self._resolve_priority,
            ConflictType.DECISION: self._resolve_decision,
        }

    def detect_conflict(self, proposals: List[AgentProposal]) -> Optional[Conflict]:
        # TODO: Return a Conflict if proposals disagree, else None
        pass

    def resolve(self, conflict: Conflict) -> Dict:
        # TODO: Route to appropriate strategy
        pass

    def _resolve_factual(self, conflict: Conflict) -> Dict:
        # TODO: Majority vote + confidence weighting
        pass

    def _resolve_resource(self, conflict: Conflict) -> Dict:
        # TODO: Priority-based or time-based allocation
        pass

    def _resolve_priority(self, conflict: Conflict) -> Dict:
        # TODO: Weighted voting by agent authority level
        pass

    def _resolve_decision(self, conflict: Conflict) -> Dict:
        # TODO: LLM judge or Borda count
        pass

    def _borda_count(self, proposals: List[AgentProposal], options: List[str]) -> str:
        # TODO: Borda count voting on options
        pass
The AI Interview - Master AI/ML Interviews