
Modern performance reviews are delayed, subjective, and reactive. Managers often realize someone is struggling only after deadlines slip or burnout appears, while engineers feel their work is reduced to shallow metrics or forgotten entirely. Existing tools show activity, but they do not explain what it means or how to act on it.

This project introduces an AI performance intelligence system that analyzes real work signals over time and turns them into explainable insights through structured agentic debate. Instead of a single AI judgment, multiple agents argue opposing, evidence-backed interpretations of an employee's work patterns. An arbiter agent then synthesizes these perspectives into balanced conclusions.

The output is not a score, but targeted questions and prompts. Engineers receive reflective check-ins grounded in their actual work. Managers receive concrete coaching questions and follow-ups backed by evidence. The system's core value is replacing opaque evaluation with transparent reasoning and better conversations.
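The debate-then-arbitrate flow described above can be sketched in a few lines. This is a minimal illustration, not the project's implementation: the agent names (`advocate`, `skeptic`, `arbiter`), the keyword-based evidence filters, and the sample signals are all hypothetical stand-ins for what would be LLM-backed agents operating on real work data.

```python
from dataclasses import dataclass

@dataclass
class Argument:
    stance: str          # which side of the debate this agent takes
    claim: str           # its interpretation of the work signals
    evidence: list[str]  # the signals it cites in support

def advocate(signals: list[str]) -> Argument:
    # Hypothetical agent: frames the signals as sustained, focused work.
    return Argument("advocate",
                    "Sustained deep work on a complex subsystem",
                    [s for s in signals if "refactor" in s or "review" in s])

def skeptic(signals: list[str]) -> Argument:
    # Hypothetical agent: frames the same signals as an overload risk.
    return Argument("skeptic",
                    "Possible overload: effort concentrated outside normal hours",
                    [s for s in signals if "late" in s or "weekend" in s])

def arbiter(arguments: list[Argument]) -> list[str]:
    # Synthesizes opposing, evidence-backed arguments into reflective
    # questions rather than a score, citing each side's evidence.
    questions = []
    for a in arguments:
        if a.evidence:  # only surface claims that are actually grounded
            questions.append(
                f"({a.stance}) {a.claim}, based on {len(a.evidence)} "
                f"signal(s): is this how the work felt to you?")
    return questions

signals = [
    "late-night commits on auth refactor",
    "weekend CI fixes",
    "review turnaround under 2h all sprint",
]
for q in arbiter([advocate(signals), skeptic(signals)]):
    print(q)
```

The design point this sketch makes is that the arbiter never emits a verdict: every output line is a question tied to the evidence an agent produced, so the reasoning stays inspectable.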
7 Feb 2026