Multi-Agent Arbitration for Conflicting Code Diffs
When two agents propose incompatible diffs, blindly picking the most recent output is risky. Arbitration needs structured scoring and a conflict-aware merge strategy.
Step 1: normalize agent proposals into comparable artifacts
{
  "agent": "planner",
  "files_changed": 4,
  "tests_passed": 27,
  "risk_flags": ["migration"]
}
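A minimal normalization sketch, assuming each agent emits a raw dict shaped like the artifact above; the `ProposalArtifact` class and `normalize` helper are hypothetical names, not part of any existing framework:

```python
from dataclasses import dataclass, field

@dataclass
class ProposalArtifact:
    # Hypothetical schema mirroring the JSON artifact above.
    agent: str
    files_changed: int
    tests_passed: int
    risk_flags: list = field(default_factory=list)

def normalize(raw: dict) -> ProposalArtifact:
    # Coerce a raw agent payload into the shared artifact shape,
    # defaulting missing fields so downstream scoring never hits a KeyError.
    return ProposalArtifact(
        agent=raw.get("agent", "unknown"),
        files_changed=int(raw.get("files_changed", 0)),
        tests_passed=int(raw.get("tests_passed", 0)),
        risk_flags=list(raw.get("risk_flags", [])),
    )
```

Normalizing first means the scoring step in Step 2 can compare proposals field by field without caring which agent produced them.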
Step 2: apply weighted scoring rules
score = 0.5 * tests + 0.2 * lint + 0.2 * safety - 0.1 * risk_flags
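The rule above can be sketched as a function, assuming `tests`, `lint`, and `safety` are each normalized to [0, 1] and `risk_flags` is a raw count of flagged concerns; the weights are taken directly from the formula:

```python
def score(tests: float, lint: float, safety: float, risk_flags: int) -> float:
    # Weighted score per the rule above: test signal dominates at 0.5,
    # lint and safety each contribute 0.2, and every risk flag
    # subtracts a fixed 0.1 penalty.
    return 0.5 * tests + 0.2 * lint + 0.2 * safety - 0.1 * risk_flags
```

Under these weights a flawless, flag-free proposal tops out at 0.9, and each risk flag costs as much as 20% of the test signal, so flags are not a tiebreaker but a real penalty.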
Step 3: route close-score conflicts to human review
if abs(score_a - score_b) < 0.05:
    return "manual_review"
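Putting the routing rule in context, a sketch of the full decision, assuming a hypothetical `arbitrate` helper and the 0.05 margin from the snippet above:

```python
def arbitrate(score_a: float, score_b: float, margin: float = 0.05) -> str:
    # Near-ties go to a human reviewer: when the scores are within
    # the margin, neither proposal is clearly safer to auto-merge.
    if abs(score_a - score_b) < margin:
        return "manual_review"
    # Otherwise the higher-scoring proposal wins automatically.
    return "agent_a" if score_a > score_b else "agent_b"
```

Keeping the margin as a parameter lets a team widen it for high-risk repositories without touching the scoring weights.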
Pitfall
Optimizing for speed alone. A merge that lands quickly but leaves semantic conflicts unresolved trades a fast close for expensive regressions later.
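One cheap guard against that pitfall: two diffs can merge cleanly at the line level yet still conflict in behavior, and diffs touching the same file are the most likely candidates. A minimal sketch, assuming each proposal exposes its set of changed file paths (the `has_semantic_overlap` name is hypothetical):

```python
def has_semantic_overlap(files_a: set, files_b: set) -> bool:
    # Any shared file is a semantic-conflict candidate: escalate to
    # manual review regardless of the score gap, since line-level
    # merge success says nothing about behavioral compatibility.
    return bool(files_a & files_b)
```

This check is deliberately conservative; a stricter variant could compare changed functions or modules instead of whole files.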