1 year of experience
AI/ML engineer and startup founder pursuing master's studies in Data Science (AI specialization) at Northwestern. Founder of Ladybug Robotics & ClariTrace. Builds at the intersection of AI and physical systems, most recently architecting a multi-modal intelligence layer that integrates computer vision, LLMs, text-to-speech, and robotic control. NIST GenAI standards contributor. Former Developer Advocate at Neo4j. Protocol Labs Award winner, Physical AI Hackathon 2026.

There are 240 million children worldwide living with learning disabilities, and many struggle to access physical books independently. Ladybug: The Robot Reader was built to change that.

Ladybug is an autonomous robotic system that reads physical books aloud from cover to cover with no human intervention. Built on the SO-101 robotic arm, it runs a perception-action loop powered by Claude Vision: it assesses the workspace, decides what to do next, and executes that step, opening a closed book, reading each page spread, turning pages, and closing the book when finished. Claude Vision analyzes camera frames to classify page types (content, title page, table of contents, index, blank) and extract text. ElevenLabs then streams natural-sounding speech in real time through a sentence-level prefetch pipeline, so audio plays continuously without pauses. Motor skills (opening, closing, and page turning) are trained as ACT (Action Chunking with Transformers) policies. The system also includes retry logic with frame hashing to detect failed page turns and automatically retry them.

Ladybug supports multiple reading modes: verbose (reads everything), skim (headers and titles only), and silent (text extraction only). It also features a web dashboard for remote monitoring and a dry-run mode for testing without hardware.

Our mission is accessibility in education: putting an autonomous reading companion in every special education classroom. We want 1,000,000 Ladybug robot readers available to children around the world.
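The perception-action loop described above can be sketched roughly as follows. This is a minimal illustration, not Ladybug's actual code: the names (`Observation`, `assess`, `run_loop`) are hypothetical, and the keyword matching stands in for the real Claude Vision call that classifies the workspace from a camera frame.

```python
"""Sketch of a perception-action loop: observe, decide, act, repeat."""

from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    OPEN_BOOK = auto()
    READ_SPREAD = auto()
    TURN_PAGE = auto()   # issued between spreads in the real system
    CLOSE_BOOK = auto()


@dataclass
class Observation:
    frame_id: int
    description: str  # stand-in for a raw camera frame


def assess(obs: Observation) -> Action:
    """Placeholder for the vision-model call that decides the next step."""
    if "closed" in obs.description:
        return Action.OPEN_BOOK
    if "end" in obs.description:
        return Action.CLOSE_BOOK
    return Action.READ_SPREAD


def run_loop(observations) -> list[Action]:
    """Execute one action per observation until the book is closed."""
    actions = []
    for obs in observations:
        action = assess(obs)
        actions.append(action)  # real system dispatches to a motor skill here
        if action is Action.CLOSE_BOOK:
            break
    return actions
```

In the real system each iteration would capture a fresh frame and dispatch the chosen action to either the TTS pipeline or an ACT motor policy.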
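A sentence-level prefetch pipeline like the one described can be sketched as a small producer-consumer setup: a background thread synthesizes audio for upcoming sentences while the main thread plays the current one, so playback never stalls waiting on synthesis. All names here are illustrative, and `synthesize` is a placeholder for the ElevenLabs streaming call.

```python
"""Sketch of sentence-level audio prefetch: synthesize ahead, play behind."""

import queue
import re
import threading


def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter on terminal punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def synthesize(sentence: str) -> bytes:
    """Placeholder for a TTS call; returns fake audio bytes."""
    return sentence.encode("utf-8")


def read_aloud(text: str, play) -> None:
    buf: queue.Queue = queue.Queue(maxsize=2)  # prefetch up to 2 sentences ahead

    def producer() -> None:
        for sentence in split_sentences(text):
            buf.put(synthesize(sentence))  # runs ahead of playback
        buf.put(None)  # sentinel: no more audio

    threading.Thread(target=producer, daemon=True).start()
    while (chunk := buf.get()) is not None:
        play(chunk)  # blocks for the clip's duration in a real player
```

The bounded queue is the key design choice: it keeps synthesis a fixed distance ahead of playback instead of buffering the whole book up front.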
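The frame-hashing retry idea can be illustrated in a few lines: hash the camera frame before and after a page-turn attempt, and treat an unchanged hash as evidence the page did not move. This is a sketch under assumed mechanics, not the project's implementation; `capture` and `turn_page` are hypothetical callables.

```python
"""Sketch of detecting failed page turns by hashing before/after frames."""

import hashlib


def frame_hash(frame: bytes) -> str:
    return hashlib.sha256(frame).hexdigest()


def turn_page_with_retry(capture, turn_page, max_attempts: int = 3) -> int:
    """capture() returns the current frame; turn_page() runs the motor policy.

    Returns the attempt number that succeeded, or raises after max_attempts.
    """
    before = frame_hash(capture())
    for attempt in range(1, max_attempts + 1):
        turn_page()
        if frame_hash(capture()) != before:
            return attempt  # frame changed, so the page visibly moved
    raise RuntimeError("page turn failed after retries")
```

A real system would likely crop to the book region and tolerate small pixel noise before hashing; an exact hash over the raw frame is the simplest version of the idea.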
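The three reading modes reduce to a small dispatch policy over the page classification. The function below is a hypothetical sketch of that mapping, assuming the vision step has already produced the extracted text and any detected headers.

```python
"""Sketch of mapping reading mode + page type to spoken output."""


def speech_for(mode: str, page_type: str, text: str, headers: list[str]) -> str:
    """Return the text to speak for one page spread.

    verbose: speak all extracted text
    skim:    speak headers and titles only
    silent:  extract text but speak nothing
    """
    if mode == "silent" or page_type == "blank":
        return ""
    if mode == "skim":
        return " ".join(headers)
    if mode == "verbose":
        return text
    raise ValueError(f"unknown mode: {mode}")
```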
15 Feb 2026