Top Builders

Meet the contributors with the most app submissions in our community.

Kiro

Kiro is an AWS-powered agentic coding service designed to revolutionize software development through "spec-driven development." It leverages artificial intelligence to interpret natural language prompts and automatically generate code and tests, significantly accelerating the development lifecycle. Kiro aims to reduce manual coding efforts and improve code quality by ensuring that applications adhere closely to their specifications.

General

  • Author: AWS
  • Release Date: 2025
  • Website: https://aws.amazon.com/
  • Documentation: https://aws.amazon.com/documentation-overview/kiro/
  • Technology Type: Agentic IDE

Key Features

  • Spec-Driven Development: Translates high-level natural language specifications into functional code and comprehensive tests.
  • AI-Powered Code Generation: Utilizes advanced AI models to write code automatically, reducing development time.
  • Automated Test Creation: Generates relevant test cases alongside code, ensuring immediate validation and higher quality.
  • AWS Integration: Seamlessly integrates with the broader AWS ecosystem, leveraging cloud infrastructure for scalable development.
  • Agentic Workflow: Employs AI agents to manage and execute development tasks, from planning to implementation.
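
The features above center on specs driving both code and tests. Kiro's actual spec format and APIs are not shown in this document, so as a loose, hypothetical illustration of the spec-driven idea, consider a workflow where a single structured spec supplies both the implementation contract and the examples used to validate it:

```python
# Illustrative sketch only: Kiro's real spec format and tooling are not shown
# here. This mimics the spec-driven idea, where a structured spec drives both
# the implementation contract and the tests that validate it.

SPEC = {
    "name": "slugify",
    "description": "Lowercase a title and join its words with hyphens.",
    "examples": [
        ("Hello World", "hello-world"),
        ("  Spec Driven  Dev ", "spec-driven-dev"),
    ],
}

def slugify(title: str) -> str:
    """Implementation written (or generated) to satisfy SPEC."""
    return "-".join(title.lower().split())

def run_spec_tests(spec, fn):
    """Check the implementation against every example in the spec."""
    return [(inp, want, fn(inp))
            for inp, want in spec["examples"]
            if fn(inp) != want]  # empty list means the spec is satisfied

print(run_spec_tests(SPEC, slugify))  # → []
```

In a spec-driven tool, changing an example in the spec immediately changes what counts as a passing implementation, which is the property the "immediate validation" feature above relies on.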

Start Building with Kiro

Kiro offers an innovative approach to software development, allowing teams to rapidly build and test applications by focusing on specifications rather than intricate coding details. As an AWS-powered service, it provides the scalability and reliability expected from a leading cloud provider. Developers interested in leveraging AI for accelerated and more reliable coding should explore Kiro's capabilities.

👉 Kiro Documentation on AWS
👉 Explore AWS AI/ML Services

AWS Kiro AI Technology Hackathon Projects

Discover innovative solutions crafted with AWS Kiro AI technology, built by our community members during our hackathons.

Communication Bridge AI

Communication Bridge AI is an intelligent platform that breaks down communication barriers between verbal and non-verbal individuals using cutting-edge AI technology. The system features real-time gesture recognition powered by MediaPipe, supporting 18 hand gestures including ASL signs like "I Love You," "Thank You," and "Help."

Our multi-agent AI architecture includes Intent Detection, Gesture Interpretation, Speech Generation, and Context Learning agents coordinated by a central orchestrator. The platform offers bidirectional communication: verbal users can speak or type messages that are translated into gesture emojis, while non-verbal users can make hand gestures captured via webcam that are interpreted and converted to natural language responses.

Built with a Python FastAPI backend and a vanilla JavaScript frontend, the system integrates Google Gemini AI for context-aware responses and MediaPipe for computer vision. Key features include user authentication with JWT, conversation history, three input methods (webcam, speech-to-text, text), and a freemium model with credit-based usage.

The platform addresses a critical need for 466 million people with hearing loss and 70 million sign language users worldwide. Primary use cases include special education classrooms, healthcare patient communication, workplace accessibility, and family connections. The system achieves 92% gesture recognition accuracy with sub-500ms response times.

Technical highlights: deployed on cloud infrastructure (Brev/Vultr), SQLite database for persistence, responsive web interface, and production-ready with systemd services and an Nginx reverse proxy. The project demonstrates practical AI application for social good, aligning with the UN Sustainable Development Goals for Quality Education and Reduced Inequalities.

Live demo: https://3001-i1jp0gsn9.brevlab.com
GitHub: https://github.com/Hlomohangcue/Communication-Agent-AI
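
The Gesture Interpretation agent described above maps recognized hand poses to natural-language labels. The project's actual code is not reproduced here; as a hypothetical simplification, assume MediaPipe's 21 hand landmarks have already been reduced to per-finger "extended" flags, so interpretation becomes a lookup:

```python
# Hypothetical simplification of the gesture-interpretation step. The real
# system uses MediaPipe's 21 hand landmarks; here we assume an upstream helper
# has already reduced them to per-finger extended/curled flags. The table and
# names below are illustrative, not the project's actual code.

GESTURES = {
    # (thumb, index, middle, ring, pinky) extended flags -> gesture label
    (True, True, False, False, True): "I Love You",       # ASL ILY handshape
    (True, True, True, True, True): "Open Palm / Hello",
    (False, True, True, False, False): "Peace",
    (True, False, False, False, False): "Thumbs Up",
}

def interpret_gesture(fingers_extended) -> str:
    """Map a finger-state tuple to a natural-language label, if known."""
    return GESTURES.get(tuple(fingers_extended), "Unknown gesture")

print(interpret_gesture((True, True, False, False, True)))  # → I Love You
```

In the full pipeline, this label would then be passed to the Speech Generation agent (backed by Gemini, per the description) to produce a context-aware sentence rather than a bare label.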

RoboGripAI

This project presents a simulation-first robotic system that performs structured physical tasks, such as pick-and-place, sorting, and simple assembly, through reliable interaction with objects and its environment. The system focuses on practical task execution rather than complex physics modeling, ensuring repeatability, robustness, and measurable performance across varied simulated conditions.

A key emphasis is reliability under dynamic conditions. The simulation introduces variations such as object position changes, minor environmental disturbances, and task sequence modifications, and the robot is designed to adapt to these variations while maintaining consistent task success rates. Basic failure handling mechanisms are implemented, including reattempt strategies for failed grasps, collision avoidance corrections, and task state recovery protocols.

The framework incorporates structured task sequencing and state-based control logic to ensure deterministic, repeatable behavior. Performance is evaluated using clear metrics: task completion rate, execution time, grasp accuracy, recovery success rate, and system stability across multiple trials. The modular system design allows scalability to additional tasks or integration with advanced planning algorithms.

By prioritizing repeatability, robustness, and measurable outcomes, this solution demonstrates practical robotic task automation in a controlled simulated environment, aligning with real-world industrial and research use cases. Overall, the project showcases a dependable robotic manipulation framework that bridges perception, decision-making, and action in a simulation-first setting, delivering consistent, benchmark-driven task execution.
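
The description combines state-based task sequencing with reattempt strategies for failed grasps. As a minimal sketch of that control pattern (the function and state names are assumptions, not RoboGripAI's actual API), each step is retried a bounded number of times before the task is marked failed and handed to recovery:

```python
# Minimal sketch of state-based sequencing with a bounded reattempt strategy,
# as described for RoboGripAI. Names and structure are illustrative assumptions.
import random

def attempt_grasp(success_rate=0.7, rng=random.Random(42)):
    """Stand-in for a simulated grasp action that can fail."""
    return rng.random() < success_rate

def run_task(steps, max_retries=3):
    """Execute (name, action) steps in order; retry each failed step
    up to max_retries times before declaring the task failed."""
    log = []
    for name, action in steps:
        for attempt in range(1, max_retries + 1):
            if action():
                log.append((name, attempt, "ok"))
                break
        else:
            # All retries exhausted: record failure and stop; in the full
            # system, a task state recovery protocol would take over here.
            log.append((name, max_retries, "failed"))
            return log
    return log

steps = [("pick", attempt_grasp), ("place", lambda: True)]
print(run_task(steps))
```

Metrics like task completion rate and recovery success rate fall out of a log in this shape: aggregate the `"ok"`/`"failed"` statuses and attempt counts over many randomized trials.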