Top Builders

Explore the top contributors in our community, ranked by their number of app submissions.

Solo Tech

Solo Tech is a cutting-edge platform that provides on-device artificial intelligence (AI) solutions. It stands out by emphasizing data privacy, efficient performance, and user control without relying on cloud-based services. This technology facilitates the integration of AI capabilities directly onto user devices, allowing for faster, private, and customized AI experiences.

General

  • Author: Solo Tech
  • Website: https://www.getsolo.tech/
  • API: Solo Tech Developer API
  • Type: On-device AI platform

Key Features

  • On-Device AI Processing: Runs AI models locally, eliminating the need for continuous internet connectivity and preserving data privacy.

  • Enhanced Data Privacy: Users retain full ownership of their data, as all processing is done locally on their devices.

  • Model Customization: Offers the ability to fine-tune and tailor AI models according to specific user needs and use cases.

  • Offline Functionality: Ensures that AI operations remain functional even in environments with limited or no internet access.

  • High-Efficiency Performance: Optimized for speed and effective operation across various devices, enhancing user experience and productivity.

Use Cases

  • Enterprise Solutions: Deploys predictive analytics, customer insights, and process optimization tools for business growth and operational efficiency.

  • GovTech Applications: Assists in data analysis, public service enhancements, and informed decision-making processes for governmental organizations.

  • Healthcare Innovations: Powers patient data analytics, diagnostic tools, and the development of personalized treatment plans.

  • Education Sector: Facilitates personalized learning experiences and supports administrative functions through AI-driven solutions.

Get Started Building with Solo Tech

To begin developing with Solo Tech, visit the official Solo Tech website for comprehensive guides, API documentation, and support resources. Integrating on-device AI into your projects ensures enhanced data privacy, reliability, and custom AI functionalities tailored to your needs.

Solo Tech Hackathon Projects

Discover innovative solutions built with Solo Tech technology by our community members during our hackathons.

Ladybug - The Robot Reader

There are 240 million children worldwide living with learning disabilities, and many struggle to access physical books independently. Ladybug: The Robot Reader was built to change that. Ladybug is an autonomous robotic system that reads physical books aloud from cover to cover with no human intervention.

Built on the SO-101 robotic arm, it uses a perception-action loop powered by Claude Vision to assess the workspace, decide what to do next, and execute: opening a closed book, reading each page spread, turning pages, and closing the book when finished. Claude Vision analyzes camera frames to classify page types (content, title page, table of contents, index, blank) and extract text. ElevenLabs then streams natural-sounding speech in real time using a sentence-level prefetch pipeline so audio plays continuously without pauses. Motor skills (opening, closing, and page turning) are trained using ACT (Action Chunking with Transformers) policies. The system includes intelligent retry logic with frame hashing to detect failed page turns and automatically retry them.

Ladybug supports multiple reading modes: verbose (reads everything), skim (headers and titles only), and silent (text extraction only). It also features a web dashboard for remote monitoring and a dry-run mode for testing without hardware. Our mission is accessibility in education: putting an autonomous reading companion in every special education classroom. We want 1,000,000 Ladybug robot readers available to children around the world.
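The frame-hashing retry idea described above can be sketched in a few lines. This is an illustrative sketch, not Ladybug's actual code: the callback names (`capture_frame`, `turn_page`) and the retry count are assumptions, and a real system would hash camera frames rather than toy byte strings.

```python
import hashlib

def frame_hash(frame_bytes: bytes) -> str:
    """Hash raw frame bytes so two camera captures can be compared cheaply."""
    return hashlib.sha256(frame_bytes).hexdigest()

def turn_page_with_retry(capture_frame, turn_page, max_retries=3):
    """Attempt a page turn; if the frame hash is unchanged afterward,
    assume the turn failed and retry. `capture_frame` and `turn_page`
    are hypothetical callbacks for the camera and the arm policy."""
    before = frame_hash(capture_frame())
    for _ in range(max_retries):
        turn_page()
        after = frame_hash(capture_frame())
        if after != before:
            return True   # the visible spread changed: turn succeeded
    return False          # same spread after all retries: report failure
```

Comparing hashes instead of raw pixels keeps the check cheap, though a production loop would likely use a perceptual hash or the vision model itself to tolerate lighting noise.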

Communication Bridge AI

Communication Bridge AI is an intelligent platform that breaks down communication barriers between verbal and non-verbal individuals using cutting-edge AI technology. The system features real-time gesture recognition powered by MediaPipe, supporting 18 hand gestures including ASL signs like "I Love You," "Thank You," and "Help." Our multi-agent AI architecture includes Intent Detection, Gesture Interpretation, Speech Generation, and Context Learning agents coordinated by a central orchestrator.

The platform offers bidirectional communication: verbal users can speak or type messages that are translated into gesture emojis, while non-verbal users can make hand gestures captured via webcam that are interpreted and converted to natural-language responses. Built with a Python FastAPI backend and a vanilla JavaScript frontend, the system integrates Google Gemini AI for context-aware responses and MediaPipe for computer vision. Key features include user authentication with JWT, conversation history, three input methods (webcam, speech-to-text, text), and a freemium model with credit-based usage.

The platform addresses a critical need for 466 million people with hearing loss and 70 million sign language users worldwide. Primary use cases include special education classrooms, healthcare patient communication, workplace accessibility, and family connections. The system achieves 92% gesture recognition accuracy with sub-500ms response times.

Technical highlights: deployed on cloud infrastructure (Brev/Vultr), SQLite database for persistence, responsive web interface, and production-ready with systemd services and an Nginx reverse proxy. The project demonstrates practical AI application for social good, aligning with the UN Sustainable Development Goals for Quality Education and Reduced Inequalities.

Live demo: https://3001-i1jp0gsn9.brevlab.com
GitHub: https://github.com/Hlomohangcue/Communication-Agent-AI
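The central-orchestrator pattern described above can be sketched as a simple pipeline that passes a shared context through each agent in turn. This is a minimal illustration of the pattern only: the agent interfaces and the toy intent rule are assumptions, and the project's real agents call Gemini and MediaPipe rather than string matching.

```python
class Orchestrator:
    """Routes a message through a pipeline of registered agents.
    Each agent is a callable that takes and returns a context dict."""

    def __init__(self):
        self.agents = []

    def register(self, name, agent):
        self.agents.append((name, agent))

    def handle(self, message: str) -> dict:
        context = {"message": message, "trace": []}
        for name, agent in self.agents:
            context = agent(context)       # each agent enriches the context
            context["trace"].append(name)  # record the path for debugging
        return context

# Illustrative stand-ins for the Intent Detection and Speech Generation agents.
def intent_detection(ctx):
    ctx["intent"] = "greeting" if "hello" in ctx["message"].lower() else "other"
    return ctx

def speech_generation(ctx):
    ctx["reply"] = "Hello there!" if ctx["intent"] == "greeting" else "Okay."
    return ctx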