1 year of experience
Hi, I'm a freshman at Northeastern studying Computer Science, with a strong interest in AI/ML. Previously, I worked as a Junior Programmer Analyst at Indus Consultancy Services, where I developed Python-based solutions for a cancer detection platform serving 10,000+ users. I migrated legacy ASP code to a modern Python architecture, improving performance for a system processing 500+ medical images daily, and mentored a junior developer on programming best practices.

Currently, I'm an Undergraduate Research Assistant working with the ArchMedes Club and the City of Oakland. I've developed testing frameworks to investigate hallucination patterns in AI systems, improved the performance of the NEULIT AI-generated text detector by experimenting with varying neural network architectures, and optimized prompt engineering for LLM-based City Council meeting summarization by systematically testing 8 prompt variations across 10 evaluation criteria.

This research experience has taught me how to rigorously evaluate and improve AI systems, from designing testing frameworks to measuring model performance across multiple metrics. I've learned that building reliable AI isn't just about training models; it's about understanding their failure modes, optimizing their outputs, and ensuring they perform consistently in real-world applications. Those skills in evaluation methodology and performance optimization are exactly what I want to apply in AI/ML engineering roles: taking what I've learned about systematic evaluation, performance optimization, and reliability engineering to real-world AI/ML systems that impact millions of users.

If you're interested in building reliable AI systems, scaling ML infrastructure to production, or collaborating on tech that makes AI more trustworthy, I'd love to connect.
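To give a flavor of the prompt-evaluation work described above, here is a minimal sketch of a grid-style evaluation loop: score every prompt variant against every criterion, then rank by mean score. The variant strings, criterion names, and `judge` scorer are illustrative stand-ins, not the actual research tooling; in practice the scorer would be an LLM judge or a human rubric.

```python
# Sketch: evaluate N prompt variants across M criteria, rank by mean score.
from statistics import mean

PROMPT_VARIANTS = [f"Summarize the council meeting; style #{i}" for i in range(8)]
CRITERIA = ["accuracy", "coverage", "brevity", "neutrality", "readability"]

def judge(prompt: str, criterion: str) -> float:
    """Stand-in scorer: a real pipeline would call an LLM judge or apply a rubric."""
    return (hash((prompt, criterion)) % 100) / 100  # dummy deterministic score in [0, 1)

# Score every (variant, criterion) pair, then aggregate per variant.
results = {p: mean(judge(p, c) for c in CRITERIA) for p in PROMPT_VARIANTS}

best = max(results, key=results.get)
print(f"Best variant: {best!r} (mean score {results[best]:.2f})")
```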

"A second grader gets pushed at recess. She doesn't tell her teacher — she's embarrassed. She doesn't tell her parents — she doesn't want to worry them. By the time an adult notices, it's been three weeks. This happens in every school, every day. Tattle Turtle exists so no kid carries that alone. Tammy the Tattle Turtle is an AI emotional support companion running on a simulated Reachy Mini robot. Students walk up and talk to Tammy through voice. She listens, validates, and asks one gentle question at a time — max 15 words, non-leading language, strict boundaries between emotional triage and treatment. What makes Tattle Turtle different is what happens beneath the conversation. Every exchange is classified in real time into GREEN, YELLOW, or RED urgency. A bad grade vent stays GREEN — private. Recess exclusion mentioned three times this week? YELLOW — a pattern surfaces on the teacher dashboard that no human could track across 25 students. A student mentions being hit? Immediate RED alert — timestamp, summary, and next steps pushed to the teacher. The system comes to them when it matters. We built this on three sponsor technologies. Google DeepMind's Gemini API powers the conversational engine with structured JSON for severity and emotion tags. Reachy Mini's SDK provides robot simulation through MuJoCo with expressive head movements and audio I/O. Hugging Face Spaces serves as the deployment layer — one-click installable on any Reachy Mini in any classroom. Tammy's prompt engineering uses a layered 5-step framework ensuring she never crosses clinical boundaries, never suggests emotions to students, and never stores identifiable data. Privacy isn't a feature — it's a constraint baked into every layer. Tattle Turtle fills the gap between a child's worst moment and an adult's awareness. One robot. Every classroom. No kid left unheard."
15 Feb 2026