lablab.ai - Community innovating and building with artificial intelligence

Travis Peng (@traviipatii)

Events attended: 1
Submissions made: 1
4+ years of experience


🤝 Top Collaborators

  • james cunningham (@deselby353)
  • Pranav Kishore (@pranav_kishore406)

Hi, I'm a freshman at Northeastern studying Computer Science, with a strong interest in AI/ML. Previously, I worked as a Junior Programmer Analyst at Indus Consultancy Services, where I developed Python-based solutions for a cancer detection platform serving 10,000+ users. I migrated legacy ASP code to a modern Python architecture, improving system performance for processing 500+ medical images daily, and mentored a junior developer on programming best practices.

Currently, I'm an Undergraduate Research Assistant with the ArchMedes Club and the City of Oakland. I've developed testing frameworks to investigate hallucination patterns in AI systems, improved the performance of the NEULIT AI-generated text detector by experimenting with varying neural network architectures, and optimized prompt engineering for LLM-based City Council meeting summarization by systematically testing 8 prompt variations across 10 evaluation criteria.

This research experience has taught me how to rigorously evaluate and improve AI systems, from designing testing frameworks to systematically measuring model performance across multiple metrics. I've learned that building reliable AI isn't just about training models; it's about understanding their failure modes, optimizing their outputs, and ensuring they perform consistently in real-world applications. These skills in evaluation methodology and performance optimization are exactly what I want to apply in AI/ML engineering roles.

I'm looking to take what I've learned in research, systematic evaluation, performance optimization, and reliability engineering, and apply it to real-world AI/ML systems that impact millions of users. If you're interested in building reliable AI systems, scaling ML infrastructure to production, or collaborating on tech that makes AI more trustworthy, I'd love to connect.
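The prompt-optimization workflow mentioned above (testing several prompt variations against a fixed set of evaluation criteria) can be sketched as a small scoring grid. This is a hypothetical illustration, not the actual research code: the `evaluate_prompts` function and the caller-supplied `score` callback are assumptions; the real rubrics would be domain-specific.

```python
def evaluate_prompts(variations, criteria, score):
    """Rank prompt variations by mean score across all criteria.

    `variations` — list of candidate prompts (e.g. 8 variations).
    `criteria`   — list of evaluation criteria (e.g. 10 criteria).
    `score`      — caller-supplied function score(variation, criterion) -> float.
    Returns (variation, mean_score) pairs, best first.
    """
    results = []
    for v in variations:
        scores = [score(v, c) for c in criteria]
        results.append((v, sum(scores) / len(scores)))
    return sorted(results, key=lambda r: r[1], reverse=True)
```

In practice the `score` callback is where the real effort lives (human rubrics, reference summaries, or automated metrics); the grid itself just makes the comparison systematic and repeatable.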

🤓 Latest Submissions

    Tattle Turtle

    "A second grader gets pushed at recess. She doesn't tell her teacher — she's embarrassed. She doesn't tell her parents — she doesn't want to worry them. By the time an adult notices, it's been three weeks. This happens in every school, every day. Tattle Turtle exists so no kid carries that alone.

    Tammy the Tattle Turtle is an AI emotional support companion running on a simulated Reachy Mini robot. Students walk up and talk to Tammy through voice. She listens, validates, and asks one gentle question at a time — max 15 words, non-leading language, strict boundaries between emotional triage and treatment.

    What makes Tattle Turtle different is what happens beneath the conversation. Every exchange is classified in real time into GREEN, YELLOW, or RED urgency. A bad grade vent stays GREEN — private. Recess exclusion mentioned three times this week? YELLOW — a pattern surfaces on the teacher dashboard that no human could track across 25 students. A student mentions being hit? Immediate RED alert — timestamp, summary, and next steps pushed to the teacher. The system comes to them when it matters.

    We built this on three sponsor technologies. Google DeepMind's Gemini API powers the conversational engine with structured JSON for severity and emotion tags. Reachy Mini's SDK provides robot simulation through MuJoCo with expressive head movements and audio I/O. Hugging Face Spaces serves as the deployment layer — one-click installable on any Reachy Mini in any classroom. Tammy's prompt engineering uses a layered 5-step framework ensuring she never crosses clinical boundaries, never suggests emotions to students, and never stores identifiable data. Privacy isn't a feature — it's a constraint baked into every layer.

    Tattle Turtle fills the gap between a child's worst moment and an adult's awareness. One robot. Every classroom. No kid left unheard."
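    The GREEN/YELLOW/RED routing layer described above might look roughly like the sketch below. This is a hypothetical reconstruction based only on the project description, not the team's actual code: the `TriageRouter` class, the JSON field names (`severity`, `topic`, `summary`), and the three-mention YELLOW threshold are all assumptions. The model (per the description, Gemini with structured JSON output) is assumed to return one classified tag object per exchange.

```python
import json
from collections import defaultdict
from datetime import datetime

# Assumed threshold: a YELLOW topic surfaces on the teacher dashboard
# once it recurs this many times (the description says "three times this week").
YELLOW_PATTERN_THRESHOLD = 3

class TriageRouter:
    """Routes classified exchanges: GREEN stays private, repeated YELLOWs
    surface as a dashboard pattern, RED triggers an immediate alert."""

    def __init__(self):
        self._yellow_counts = defaultdict(int)  # (student, topic) -> count
        self.dashboard = []                     # pattern entries for the teacher
        self.alerts = []                        # immediate RED alerts

    def route(self, student_id, model_reply_json):
        """Route one exchange. `model_reply_json` is the structured JSON
        string the conversational model emits, e.g.
        '{"severity": "RED", "emotion": "fear", "summary": "..."}'."""
        tags = json.loads(model_reply_json)
        severity = tags["severity"]
        if severity == "GREEN":
            return "private"  # nothing leaves the conversation
        if severity == "YELLOW":
            key = (student_id, tags.get("topic", "general"))
            self._yellow_counts[key] += 1
            if self._yellow_counts[key] >= YELLOW_PATTERN_THRESHOLD:
                self.dashboard.append({"student": student_id, "topic": key[1],
                                       "count": self._yellow_counts[key]})
                return "dashboard"
            return "logged"
        if severity == "RED":
            self.alerts.append({"student": student_id,
                                "summary": tags.get("summary", ""),
                                "time": datetime.now().isoformat()})
            return "alert"
        raise ValueError(f"unknown severity: {severity}")
```

    Keeping the routing deterministic and outside the model, so the LLM only classifies and the escalation policy lives in plain code, matches the description's emphasis on strict boundaries and consistent behavior.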

    Hackathon link

    15 Feb 2026

👌 Attended Hackathons

    Launch and Fund Your Own Startup-Edition 1

    Join our $1,000,000+ startup challenge series powered by Surge.

    📌 This announcement outlines the launch of a Global Human+AI Exchange — aligning universities, global cities, talent networks, and industry to accelerate human-centered AI innovation, technology transfer, and economic growth. 👉 Read more here
    ⏱️ 8 days to turn your idea, or existing product, into an investable demo.
    📅 February 6 - 15, 2026
      • Feb 6 - 14 (Online Phase) - Collaborate and build online with developers and AI innovators from around the world. All projects must be submitted by the end of the online phase on February 14th. 🕙 Doors open at 10:00 AM, first come, first served.
      • Feb 14 (On-site Build Day) - Selected participants will be invited to an exclusive in-person session to refine their projects and connect directly with mentors.
      • Feb 15 (On-site Demos & Awards) - Live pitching sessions to a panel of judges and ecosystem partners, followed by the official winner announcement.
    🌟 Get feedback from startup mentors and technical experts.
    📍 On-site Venue (Feb 14–15): MindsDB SF AI Collective, 3154 17th St, San Francisco, California, USA
    📲 Real-time on-site updates (SF): For real-time announcements and information during the on-site portion (Feb 14–15), join the LabLab SF Chapter WhatsApp group. 👉 Join the WhatsApp group
    🤝 Join solo or with a team. New founders and existing startups are welcome.
    🏆 $1,000,000+ in credits, token prizes, perks, and funding opportunities.
    📍 On-site participation is by invitation only. Travel and accommodation expenses will not be covered.
    🧑‍💻 Apply now to build, validate, and pitch with purpose.

📝 Certificates

    Launch and Fund Your Own Startup-Edition 1 | Certificate

    View Certificate