
WritenDraw is an agentic AI simulation platform that puts junior developers through realistic production incidents, bridging the gap between learning to code and working on a real team. The core innovation is the agentic workflow: Google Gemini 2.0 Flash orchestrates the entire simulation through three autonomous agents.

AGENTIC EVALUATION: Every step requires a free-text response (no multiple choice). Gemini evaluates each response against a per-step rubric, scoring reasoning 0-15, and adapts its feedback based on accumulated performance.

AGENTIC MENTORING: The AI mentor maintains persistent context, tracking understanding level, chat count, and time pressure. Early messages are patient ("what do you think?"); by message 7 or later, the tone shifts to "just write it up." The agent autonomously decides how much help to give.

AGENTIC AUDIT: The system logs every response, chat message, code submission, and score, building a complete picture of how a developer thinks through a crisis. The AI continuously assesses and adapts.

The simulation drops you into a P1 incident at ShopRight, a fictional UK supermarket. You join a standup, read a Jira ticket, investigate messy code with no hints, chat with the AI mentor, write a fix, respond to code review, create a deployment plan, and contribute to a retro. Paste is disabled.

Key insight: explanation scores higher than code (10 points vs 5). Wrong code with a great explanation beats perfect code with no explanation, because in real teams communication matters as much as code.

WritenDraw builds on the author's published research, "TrueSkills: AI-Resistant Assessment Through Personalized Understanding Validation" (SSRN, 2025, DOI: 10.2139/ssrn.5674130), which demonstrated that AI-resistant assessment requires evaluating understanding rather than recall. WritenDraw takes this further by testing how developers think under realistic production pressure.

Built with Python/Flask, Google Gemini 2.0 Flash, CodeMirror, Pyodide, and Docker.
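The per-step evaluation loop described above could be sketched roughly as follows. This is a minimal illustration, not WritenDraw's actual code: the `Rubric` structure, the prompt format, and `score_response` are all hypothetical names, and the Gemini call is stubbed out so the sketch runs offline (a real version would call the `google-generativeai` SDK's `generate_content` instead).

```python
# Hypothetical sketch of rubric-based free-text evaluation.
# The model call is a stub; names and prompt format are assumptions.
from dataclasses import dataclass


@dataclass
class Rubric:
    step: str
    criteria: list            # what a strong answer must cover
    max_points: int = 15      # reasoning is scored 0-15 per the design


def build_eval_prompt(rubric: Rubric, response: str) -> str:
    """Ask the model to grade a free-text response against the rubric."""
    criteria = "\n".join(f"- {c}" for c in rubric.criteria)
    return (
        f"Step: {rubric.step}\n"
        f"Rubric:\n{criteria}\n"
        f"Candidate response:\n{response}\n"
        f"Return only an integer score from 0 to {rubric.max_points}."
    )


def call_model(prompt: str) -> str:
    # Stub standing in for a Gemini 2.0 Flash call; returns a fixed score.
    return "11"


def score_response(rubric: Rubric, response: str) -> int:
    raw = call_model(build_eval_prompt(rubric, response))
    score = int(raw.strip())
    # Clamp to the rubric's range in case the model misbehaves.
    return max(0, min(score, rubric.max_points))


rubric = Rubric(
    step="Investigate the P1 incident",
    criteria=["identifies the failing component", "proposes a safe rollback"],
)
print(score_response(rubric, "The checkout service is timing out..."))  # → 11
```

Clamping the model's raw output is the kind of defensive step a real evaluator would need, since an LLM asked for "only an integer" will occasionally return something else.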
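The mentor's escalation policy (patient early, directive by message 7+) amounts to a small state machine over the tracked context. The sketch below is an assumption-laden illustration: the `MentorState` fields, the intermediate "hinting" stance, and the exact thresholds are inferred from the description, not taken from WritenDraw's implementation.

```python
# Hypothetical sketch of the mentor's help-escalation policy.
# Thresholds and field names are assumptions based on the write-up.
from dataclasses import dataclass


@dataclass
class MentorState:
    chat_count: int = 0
    understanding: float = 0.5   # assumed scale: 0 = lost, 1 = confident


def mentor_stance(state: MentorState) -> str:
    """Decide how directive the mentor's next reply should be."""
    if state.chat_count >= 7:
        return "directive"       # "just write it up"
    if state.chat_count >= 4 or state.understanding < 0.3:
        return "hinting"         # nudge toward the answer
    return "socratic"            # "what do you think?"


state = MentorState()
stances = []
for _ in range(8):
    state.chat_count += 1
    stances.append(mentor_stance(state))
print(stances)
# → ['socratic', 'socratic', 'socratic', 'hinting',
#    'hinting', 'hinting', 'directive', 'directive']
```

In an agentic setup, the chosen stance would be injected into the mentor agent's system prompt each turn, so the same underlying model behaves differently as pressure mounts.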
15 Feb 2026