Top Builders

Explore the top contributors with the highest number of app submissions in our community.

Webots

Webots is a professional, open-source 3D robot simulator that offers a comprehensive development environment for modeling, programming, and simulating robots. It is widely used for AI-driven control testing, allowing researchers and developers to design and evaluate robotic systems in a realistic virtual setting before deploying them in the physical world.

General
Author: Cyberbotics Ltd.
Release Date: 1998
Website: https://cyberbotics.com/
Documentation: https://cyberbotics.com/doc/guide/index
Repository: https://github.com/cyberbotics/webots
Technology Type: 3D robot simulator

Key Features

  • Realistic 3D Simulation: Provides accurate physics and rendering for a wide range of robot models and environments.
  • Robot Modeling: Tools for designing and importing various robot types, sensors, and actuators.
  • Programming Interface: Supports multiple programming languages (C/C++, Python, Java, MATLAB, etc.) for controlling robots.
  • AI Integration: Ideal for testing AI algorithms, machine learning, and neural networks in robotic control.
  • Open-Source: Freely available for research and development, fostering community contributions.
  • Cross-Platform: Runs on Windows, Linux, and macOS.
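
As a taste of the programming interface, here is a minimal reactive obstacle-avoidance controller sketch using the Webots Python API. The device names (`ds_left`, `ds_right`, `left wheel motor`, `right wheel motor`), the 0.5 m threshold, and the assumption that the distance sensors' lookup tables return meters are all illustrative and depend on the robot model you use:

```python
# Hypothetical Webots controller sketch: steer away from the closer obstacle.
def avoid(left_dist, right_dist, cruise=3.0):
    """Reactive rule: if one side reads an obstacle closer than 0.5 (units
    depend on the sensor's lookup table), spin away from it; else go straight.
    Returns (left_wheel_velocity, right_wheel_velocity)."""
    if left_dist < 0.5:    # obstacle on the left -> veer right
        return cruise, -cruise
    if right_dist < 0.5:   # obstacle on the right -> veer left
        return -cruise, cruise
    return cruise, cruise

if __name__ == "__main__":
    # The `controller` module is provided by Webots inside a controller process.
    from controller import Robot
    robot = Robot()
    timestep = int(robot.getBasicTimeStep())
    ds_left = robot.getDevice("ds_left")     # assumed device names
    ds_right = robot.getDevice("ds_right")
    ds_left.enable(timestep)
    ds_right.enable(timestep)
    left_motor = robot.getDevice("left wheel motor")
    right_motor = robot.getDevice("right wheel motor")
    for m in (left_motor, right_motor):
        m.setPosition(float("inf"))  # switch motor to velocity control
        m.setVelocity(0.0)
    while robot.step(timestep) != -1:
        l, r = avoid(ds_left.getValue(), ds_right.getValue())
        left_motor.setVelocity(l)
        right_motor.setVelocity(r)
```

The same control loop structure (`robot.step(timestep)` driving sensor reads and actuator writes) carries over to the other supported languages.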

Start Building with Webots

Webots is an essential tool for anyone working on robotics and AI. Its ability to simulate complex robotic systems in a controlled environment allows for rapid prototyping, testing, and iteration of AI control algorithms. Explore the user guide and GitHub repository to get started with modeling your own robots and implementing AI-driven control strategies.

šŸ‘‰ Webots User Guide: https://cyberbotics.com/doc/guide/index
šŸ‘‰ Webots GitHub Repository: https://github.com/cyberbotics/webots

Webots Hackathon Projects

Discover innovative solutions built with Webots by our community members during our hackathons.

RescueOrch - Orchestration platform for simulation

RescueOrch integrates Webots R2025a simulation with DJI Mavic 2 Pro drones, TIAGo++ ground robots, and the Gemini 3 Flash LLM to orchestrate multi-agent rescue operations. Its current demo tackles a kitchen fire, but the same platform can simulate active-shooter drills, FEMA-style flood recovery, massive fires in dense areas like Mumbai's Dharavi, pipeline or oil-rig accidents, and high-magnitude earthquakes. Gemini produces real-time plans, so drones scout hazards, ground robots execute tasks, and the plan adjusts as conditions change.

These simulations address real gaps: U.S. fire-response times average 6–8 minutes in cities and exceed 10 minutes in rural areas, while Mumbai's fire brigade reports response times of roughly 10 minutes in the city and 20 minutes in the suburbs. Globally, over 180,000 people die from burn injuries each year, and 86,473 people died in disasters in 2023. Last year, 89 U.S. firefighters died on duty, underscoring the dangers responders face. After earthquakes, survival drops from about 90% in the first day to 5–10% after 72 hours, so rapid coordination saves lives.

By running "what-if" scenarios (larger or multi-room fires, stronger earthquakes, or multiple simultaneous hazards), RescueOrch helps agencies test strategies and decide whether additional drones or rugged robots would improve outcomes. With 27,000 U.S. fire departments and 23 oil refineries in India, there is a broad user base for physics-based simulation training. In short, RescueOrch offers a versatile AI-driven testbed to help responders plan, train, and procure the right equipment for complex emergencies.
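
The orchestration idea (an LLM emitting a plan that pairs agents with hazards and revises it as conditions change) can be sketched in plain Python. This is an illustrative greedy nearest-agent assignment, not the project's actual code; the agent names and coordinates are made up:

```python
# Illustrative sketch of one planning step a rescue orchestrator might take:
# greedily pair each reported hazard with the nearest still-free agent.
import math

def assign(agents, hazards):
    """agents: {name: (x, y)} positions; hazards: [(x, y), ...] sites.
    Returns {hazard_index: agent_name}. Re-running this on updated
    positions/hazards mimics the plan adjusting as conditions change."""
    free = dict(agents)
    plan = {}
    for i, hazard in enumerate(hazards):
        if not free:
            break  # more hazards than agents: leftover hazards stay unassigned
        nearest = min(free, key=lambda name: math.dist(free[name], hazard))
        plan[i] = nearest
        del free[nearest]
    return plan

# Example: a drone near the origin takes the close hazard, the ground
# robot takes the far one.
plan = assign({"drone1": (0, 0), "tiago1": (10, 10)}, [(1, 1), (9, 9)])
```

In a real system the scoring would weigh agent capabilities (scouting vs. manipulation) and hazard severity, not just distance.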

RAKSHAK - Autonomy Evaluation Framework

RAKSHAK is an evaluation and benchmarking framework that validates autonomous robots before real-world deployment. As autonomous systems enter disaster zones, hospitals, warehouses, food and medicine delivery networks, agricultural pesticide spraying, and public infrastructure, failures are no longer minor bugs: they can result in injuries, recalls, lawsuits, and lost trust. Most autonomy failures occur in edge-case conditions not covered by standard testing, and field validation can cost $50K–$500K per failure iteration. RAKSHAK exposes these failures safely in simulation, before any deployment risk exists.

Built on top of Webots for real-time 3D simulation, RAKSHAK turns simulation into adversarial validation infrastructure. Instead of testing robots under ideal conditions, it injects 50+ structured chaos scenarios (battery degradation, sensor blackouts, communication loss, environmental hazards, network latency, and multi-agent conflicts) derived from real-world robotics failure modes.

The platform integrates LLM-driven autonomy via the Gemini API and runs cloud-deployed simulations on Vultr infrastructure with WebSocket-based real-time telemetry. It performs live stress injection and generates a quantified Trust Score (0–100) across safety, resilience, efficiency, communication reliability, and task completion.

Example: a delivery drone carrying food or emergency medicine passes obstacle-avoidance tests but crashes when its battery drops below 20% during evasive maneuvers. RAKSHAK's structured power-drop scenario exposes this weakness before the first flight, preventing potential six-figure losses in hardware, liability, operational downtime, and public trust.

This is not just simulation; it is measurable deployment readiness. As autonomous systems scale globally, validation must scale with them. RAKSHAK ensures robots are trusted before they are deployed.
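
A Trust Score aggregated from the five dimensions the text names could look like the following sketch. The dimensions come from the description above; the equal default weights and the weighted-mean formula are assumptions for illustration, not RAKSHAK's actual scoring:

```python
# Hypothetical aggregate "Trust Score" (0-100): a weighted mean over the
# five evaluation dimensions named in the project description.
DIMENSIONS = ["safety", "resilience", "efficiency",
              "communication", "task_completion"]

def trust_score(scores, weights=None):
    """scores: per-dimension results in [0, 100].
    weights: optional per-dimension weights (defaults to equal weighting).
    Returns the weighted mean, also in [0, 100]."""
    weights = weights or {d: 1.0 for d in DIMENSIONS}
    total = sum(weights[d] for d in DIMENSIONS)
    return sum(scores[d] * weights[d] for d in DIMENSIONS) / total

# Example run: strong safety and comms, weak task completion.
result = trust_score({"safety": 90, "resilience": 80, "efficiency": 70,
                      "communication": 100, "task_completion": 60})
```

Passing a weights dict that emphasizes `safety` would let an operator make the score stricter for safety-critical deployments.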

RoboGripAI

This project presents a simulation-first robotic system designed to perform structured physical tasks (pick-and-place, sorting, and simple assembly) through reliable interaction with objects and its environment. The system focuses on practical task execution rather than complex physics modeling, ensuring repeatability, robustness, and measurable performance across varied simulated conditions.

A key emphasis is reliability under dynamic conditions. The simulation introduces variations such as object position changes, minor environmental disturbances, and task-sequence modifications, and the robot is designed to adapt to them while maintaining consistent task success rates. Basic failure handling is implemented, including reattempt strategies for failed grasps, collision-avoidance corrections, and task-state recovery protocols.

The framework incorporates structured task sequencing and state-based control logic to ensure deterministic, repeatable behavior. Performance is evaluated with clear metrics: task completion rate, execution time, grasp accuracy, recovery success rate, and system stability across multiple trials. The modular design allows scaling to additional tasks or integration with advanced planning algorithms.

By prioritizing repeatability, robustness, and measurable outcomes, the project demonstrates practical robotic task automation in a controlled simulated environment, aligning with real-world industrial and research use cases, and delivers a dependable manipulation framework that bridges perception, decision-making, and action in a simulation-first setting.
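
The reattempt-then-recover pattern described above can be sketched as a tiny bounded-retry state machine. This is an illustrative sketch, not the project's code; the state names and the attempt limit are assumptions:

```python
# Illustrative reattempt strategy: retry a failed grasp a bounded number of
# times, then fall back to a task-state recovery protocol.
def run_grasp_task(grasp, max_attempts=3):
    """grasp: callable returning True on success, False on a failed grasp.
    Returns "done" if any attempt succeeds within the budget,
    otherwise "recovery" to signal the recovery protocol should run."""
    for attempt in range(1, max_attempts + 1):
        if grasp():
            return "done"
        # A real system would adjust the approach pose between attempts here.
    return "recovery"
```

Keeping the retry budget explicit is what makes the behavior deterministic and repeatable across trials, which matches the metrics (recovery success rate, stability across trials) the project reports.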

ARC SPATIAL - Autonomous Robotics Control

ARC SPATIAL – Digital Twin & Robot Fleet Simulator

Key Features

  • 3D robot arm pick-and-place simulation (physics + auto mode)
  • Upload a floor plan (PDF/image) → AI (Gemini Vision) auto-detects zones
  • Real-time 2D fleet simulation: LiDAR, A* pathfinding, AI auto-scheduling
  • Zone setup: draw manually or AI auto-detect, synced across tabs
  • Natural language task planner powered by Google Gemini 2.0 Flash
  • Analytics dashboard: throughput, robot utilization, battery levels + charts
  • Live overview: fleet status, activity feed, quick controls

How It Works (Quick Flow)

  • Create a digital twin: upload a floor plan → Gemini AI auto-detects zones
  • Deploy a virtual fleet: choose from 4 robot types (Mobile Manipulator, Forklift, Transport, etc.)
  • Assign tasks: use plain-English commands or manual input → AI optimizes based on battery, distance, and robot skills
  • Watch live: see pick-and-place, sorting, and assembly in 3D or 2D view (WebSocket real-time updates)
  • Analyze & improve: check performance metrics (throughput, utilization, completion) → optimize before real deployment

Use Case 1 – Construction Site Material Handling

  • Problem: 45+ materials scattered and 4 robot types needed; manual coordination = delays & mistakes
  • Solution with ARC SPATIAL: upload the site plan → AI detects zones; deploy 4 virtual robots; AI schedules material transport tasks; monitor live in 3D/2D → test & refine
  • Results: 60% faster material handling, 99% placement accuracy, zero real-world risk (fully tested in simulation first)

Use Case 2 – New Warehouse Automation Planning

  • Problem: unsure how many robots or which layout is best, risking a $500K+ investment
  • Solution with ARC SPATIAL: upload the warehouse floor plan; define zones (storage, staging, shipping); simulate different fleet sizes & positions; AI optimizes task assignments & validates throughput
  • Results: confirmed exact fleet needs & ideal layout before spending money, with a clear, confident ROI projection from simulation data
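
The A* pathfinding cited in the feature list can be sketched minimally for a 4-connected grid with a Manhattan heuristic. The grid encoding and cost model here are illustrative, not ARC SPATIAL's implementation:

```python
# Minimal A* on a 2D occupancy grid: 0 = free cell, 1 = blocked cell.
# Uses a Manhattan-distance heuristic, which is admissible for 4-connected
# moves of uniform cost.
import heapq

def astar(grid, start, goal):
    """Returns the shortest path from start to goal as a list of (row, col)
    cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # (f, g, position, path)
    visited = set()
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in visited:
            continue
        visited.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None
```

A fleet scheduler would call something like this per robot, with occupied cells (and optionally other robots' reserved paths) marked as blocked.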