Top Builders

Explore the top contributors with the most app submissions in our community.

AWS

Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Since its launch in 2006, AWS has provided a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in 190 countries worldwide. AWS is a leader in cloud computing, offering services that span compute, storage, databases, networking, analytics, machine learning, artificial intelligence, Internet of Things (IoT), security, and enterprise applications.

General
Company: Amazon Web Services, Inc.
Founded: 2006
Website: https://aws.amazon.com/
Documentation: https://docs.aws.amazon.com/
Technology Type: Cloud Provider

Start Building with AWS Products

AWS provides a vast array of services that enable developers and businesses to build sophisticated, scalable applications in the cloud. From foundational services like compute and storage to advanced machine learning and AI capabilities, AWS offers the tools needed to innovate and grow.

AWS SageMaker

AWS SageMaker is a fully managed machine learning service that enables developers to quickly and easily build, train, and deploy machine learning models at scale. You can find more information on our AWS SageMaker tech page.
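To make the deployment side concrete, here is a minimal sketch of calling an already-deployed SageMaker model endpoint with boto3's SageMaker Runtime client. The endpoint name, region, and CSV payload are illustrative placeholders, not part of any specific deployment.

```python
import boto3  # AWS SDK for Python

# Assumes AWS credentials are configured and a model is already deployed
# behind a SageMaker endpoint named "my-model-endpoint" (hypothetical).
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

response = runtime.invoke_endpoint(
    EndpointName="my-model-endpoint",  # placeholder endpoint name
    ContentType="text/csv",            # format the model was trained to accept
    Body="5.1,3.5,1.4,0.2",            # one feature row as CSV
)
print(response["Body"].read().decode("utf-8"))  # model prediction
```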

AWS Kiro

Kiro is an AWS-powered agentic coding service that uses "spec-driven development" to turn prompts into code and tests. You can find more information on our AWS Kiro tech page.


AWS AI Technologies Hackathon Projects

Discover innovative solutions crafted with AWS AI Technologies, developed by our community members during our engaging hackathons.

AdaptiFleet

Traditional warehouse automation has improved efficiency, yet many systems remain rigid, expensive, and difficult to adapt when workflows or layouts change. Even small adjustments often require specialized expertise or time-consuming reprogramming. This creates a disconnect between what operators need robots to do and how easily they can communicate those needs, a challenge we call the “Human Intent Gap.”

AdaptiFleet was designed to close this gap by enabling intuitive, AI-driven fleet control. Instead of relying on complex interfaces or predefined scripts, users interact with autonomous robots using natural language. Commands such as “Get me three bags of chips and a cold drink” are interpreted and translated into structured robotic tasks automatically.

At its core, AdaptiFleet leverages Gemini-powered Vision Language Models (VLMs) to understand user intent and visual context. Robots operate within a dynamic decision framework, allowing them to adapt to changing environments rather than follow rigid, pre-programmed routes. The platform integrates a digital twin simulation stack built on Isaac Sim, enabling teams to validate behaviors, test workflows, and optimize multi-robot coordination before live deployment.

Once deployed, ROS2 and Nav2 provide robust navigation, dynamic path planning, and collision avoidance. The VLM orchestration layer continuously analyzes visual inputs to support scene understanding, anomaly detection, and proactive hazard awareness. When conditions change, AdaptiFleet autonomously re-plans routes and tasks, reducing downtime and operational disruption.

By combining conversational interaction, real autonomy, and simulation-driven validation, AdaptiFleet simplifies robotic deployments while improving efficiency and visibility. The result is an automation system that is adaptive, scalable, and aligned with how people naturally work.
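To illustrate the intent-to-task translation described above, here is a minimal, hypothetical Python sketch that asks a Gemini model to turn a free-form operator command into structured pick tasks. The prompt wording, task schema, and model choice are assumptions for illustration, not AdaptiFleet's actual implementation.

```python
import json
import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key="YOUR_API_KEY")            # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model choice

PROMPT = (
    "Translate the warehouse command into a JSON array of tasks. "
    'Each task is {"action": "pick", "item": <str>, "quantity": <int>}. '
    "Reply with raw JSON only."
)

def command_to_tasks(command: str) -> list[dict]:
    """Turn a free-form operator command into structured robot tasks."""
    reply = model.generate_content(f"{PROMPT}\nCommand: {command}")
    text = reply.text.strip()
    if text.startswith("```"):  # strip a markdown fence if the model adds one
        text = text.strip("`").removeprefix("json").strip()
    return json.loads(text)  # production code would validate this schema

for task in command_to_tasks("Get me three bags of chips and a cold drink"):
    print(task)  # e.g. {"action": "pick", "item": "chips", "quantity": 3}
```

In a full system, each parsed task would then be handed to the fleet scheduler for assignment to a robot; that dispatch layer is omitted here.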

Communication Bridge AI

Communication Bridge AI is an intelligent platform that breaks down communication barriers between verbal and non-verbal individuals using cutting-edge AI technology. The system features real-time gesture recognition powered by MediaPipe, supporting 18 hand gestures including ASL signs like "I Love You," "Thank You," and "Help."

Our multi-agent AI architecture includes Intent Detection, Gesture Interpretation, Speech Generation, and Context Learning agents coordinated by a central orchestrator. The platform offers bidirectional communication: verbal users can speak or type messages that are translated into gesture emojis, while non-verbal users can make hand gestures captured via webcam that are interpreted and converted to natural language responses.

Built with a Python FastAPI backend and a vanilla JavaScript frontend, the system integrates Google Gemini AI for context-aware responses and MediaPipe for computer vision. Key features include user authentication with JWT, conversation history, three input methods (webcam, speech-to-text, text), and a freemium model with credit-based usage.

The platform addresses a critical need for 466 million people with hearing loss and 70 million sign language users worldwide. Primary use cases include special education classrooms, healthcare patient communication, workplace accessibility, and family connections. The system achieves 92% gesture recognition accuracy with sub-500ms response times.

Technical highlights: deployed on cloud infrastructure (Brev/Vultr), SQLite database for persistence, responsive web interface, and production-ready with systemd services and Nginx reverse proxy. The project demonstrates practical AI application for social good, aligning with the UN Sustainable Development Goals for Quality Education and Reduced Inequalities.

Live demo: https://3001-i1jp0gsn9.brevlab.com
GitHub: https://github.com/Hlomohangcue/Communication-Agent-AI
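As a concrete illustration of the gesture-capture step, the sketch below detects hand landmarks from a webcam feed with MediaPipe and OpenCV. It is a simplified illustration rather than the project's actual pipeline, and the classifier that would map landmarks to one of the 18 supported gestures is omitted.

```python
import cv2              # pip install opencv-python
import mediapipe as mp  # pip install mediapipe

# MediaPipe Hands yields 21 3-D landmarks per detected hand; a downstream
# classifier would map those landmarks to a gesture label.
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures frames in BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        landmarks = results.multi_hand_landmarks[0].landmark
        # Landmark 4 is the thumb tip; coordinates are normalized to [0, 1].
        print(f"thumb tip: ({landmarks[4].x:.2f}, {landmarks[4].y:.2f})")
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
hands.close()
```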