Top Builders

Explore the top contributors with the highest number of app submissions in our community.

NVIDIA

NVIDIA Corporation is a global leader in accelerated computing, specializing in the design of graphics processing units (GPUs) for the gaming, professional visualization, data center, and automotive markets. As a pioneer in parallel computing, NVIDIA has been instrumental in the advancement of artificial intelligence, providing the foundational hardware and software platforms that drive modern AI research and deployment.

General
Author: NVIDIA Corporation
Release Date: 1993
Website: https://www.nvidia.com/
Documentation: https://docs.nvidia.com/
Technology Type: Hardware / AI

Key Products and Technologies

  • GPUs (Graphics Processing Units): High-performance processors essential for parallel computing tasks in AI, machine learning, and deep learning.
  • CUDA Platform: A parallel computing platform and programming model that enables significant performance gains by harnessing the power of GPUs (see the sketch after this list).
  • NVIDIA AI Software Suites: Comprehensive collections of tools and frameworks, such as NVIDIA NeMo for large language model development and deployment, and NVIDIA TensorRT for high-performance deep learning inference.
  • NVIDIA Jetson: Edge AI platform for autonomous machines, robotics, and embedded systems.
  • NVIDIA Omniverse: A platform for 3D design collaboration and simulation, facilitating the development of virtual worlds and digital twins.
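
To make the CUDA bullet above concrete, here is a minimal sketch of the kernel/thread/block launch model that GPU acceleration relies on, written with Numba's CUDA bindings in Python. This is illustrative only and requires an NVIDIA GPU with the CUDA toolkit installed; CUDA C/C++ remains the canonical interface.

```python
# Minimal vector addition on the GPU via Numba's CUDA bindings (illustrative only).
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)            # global thread index across the launched grid
    if i < out.size:            # guard threads that fall past the array end
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)   # Numba copies the arrays to/from the GPU

assert np.allclose(out, a + b)
```

Each GPU thread handles one element; the same launch pattern scales up to the matrix and tensor operations that dominate deep learning workloads.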

Start Building with NVIDIA

NVIDIA's ecosystem of hardware and software is critical for accelerating AI development and deploying high-performance computing solutions. From data centers to edge devices, NVIDIA technology powers a vast array of AI applications, including agent lifecycle management with tools like NeMo. Developers are encouraged to explore the extensive documentation and resources available to leverage NVIDIA's capabilities for their projects.

👉 NVIDIA Developer Program 👉 NVIDIA AI Platform Overview

NVIDIA AI Technologies Hackathon projects

Discover innovative solutions crafted with NVIDIA AI Technologies, developed by our community members during our engaging hackathons.

AdaptiFleet

Traditional warehouse automation has improved efficiency, yet many systems remain rigid, expensive, and difficult to adapt when workflows or layouts change. Even small adjustments often require specialized expertise or time-consuming reprogramming. This creates a disconnect between what operators need robots to do and how easily they can communicate those needs — a challenge we call the "Human Intent Gap."

AdaptiFleet was designed to close this gap by enabling intuitive, AI-driven fleet control. Instead of relying on complex interfaces or predefined scripts, users interact with autonomous robots using natural language. Commands such as "Get me three bags of chips and a cold drink" are interpreted and translated into structured robotic tasks automatically.

At its core, AdaptiFleet leverages Gemini-powered Vision Language Models (VLMs) to understand user intent and visual context. Robots operate within a dynamic decision framework, allowing them to adapt to changing environments rather than follow rigid, pre-programmed routes. The platform integrates a digital twin simulation stack built on Isaac Sim, enabling teams to validate behaviors, test workflows, and optimize multi-robot coordination before live deployment. Once deployed, ROS2 and Nav2 provide robust navigation, dynamic path planning, and collision avoidance. The VLM orchestration layer continuously analyzes visual inputs to support scene understanding, anomaly detection, and proactive hazard awareness. When conditions change, AdaptiFleet autonomously re-plans routes and tasks, reducing downtime and operational disruption.

By combining conversational interaction, real autonomy, and simulation-driven validation, AdaptiFleet simplifies robotic deployments while improving efficiency and visibility. The result is an automation system that is adaptive, scalable, and aligned with how people naturally work.
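
As a rough sketch of how a natural-language command could be turned into structured robot tasks, here is a minimal example using Google's google-generativeai Python package. The task schema, prompt, and model name are illustrative assumptions, not AdaptiFleet's actual orchestration layer.

```python
# Hypothetical command-to-task translator; schema and prompt are illustrative.
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

PROMPT = (
    "Translate the warehouse request into a JSON array of robot tasks. "
    'Each task has "action" ("pick" or "deliver"), "item", and "quantity". '
    "Return JSON only.\n\nRequest: "
)

def command_to_tasks(command: str) -> list[dict]:
    """Ask Gemini to turn a natural-language command into structured tasks."""
    response = model.generate_content(
        PROMPT + command,
        generation_config={"response_mime_type": "application/json"},
    )
    return json.loads(response.text)

tasks = command_to_tasks("Get me three bags of chips and a cold drink")
# e.g. [{"action": "pick", "item": "chips", "quantity": 3},
#       {"action": "pick", "item": "cold drink", "quantity": 1}]
```

A downstream planner would then map each structured task onto ROS2/Nav2 navigation goals for the fleet.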

CarphaCom - Robotised E-commerce

CarphaCom is a revolutionary "Shop-to-Ship" platform that bridges the gap between digital sales and physical logistics. Built for the 2026 enterprise landscape, it uses Medusa.js 2.0 and Next.js 14 to provide a lightning-fast B2B/B2C experience with multi-tier pricing. The core innovation is the seamless integration of NVIDIA Isaac Sim, hosted on Vultr infrastructure. When a customer places an order on the storefront, a real-time event triggers an autonomous robot in our digital twin warehouse. The robot fulfills the order by picking the products from the shelves (mapped to store categories), carrying them to the packing zone and then to the courier zone to be picked up and delivered to the customer. It then automatically marks the order as completed in Medusa, records the courier's tracking number, and generates a PDF/XML invoice.

Using Gemini AI, users can also interact with the fleet by voice: in the Warehouse tab's AI assistant, they press the mic button to speak and press it again to finish, and the integrated Gemini model translates the command into instructions the robots execute. Users can likewise chat to manage inventory, which consists of the categories and products from the storefront/Medusa. Once the physical task is completed, the robot automatically updates the order status in the store to "Ready for Pickup."

CarphaCom also includes advanced business tools: a professional dashboard, Google Merchant Center, Analytics, and Console sync, Google login, AI-powered autoblogging using a Gemini model, a CMS, B2B API sync, guided XML/CSV bulk import, integrated PayU payments, proxy-based contact scraping, the Google Maps API for contacts, and SMS and email campaigns. It is a complete, scalable, and autonomous commerce solution designed to eliminate human error and maximize efficiency. Setting aside the robotised warehouse integration, the e-commerce side is roughly 75% complete and still needs hardening, testing, and a stable release.
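
The "Shop-to-Ship" flow described above can be pictured as a small event-driven bridge between the store and the robot fleet. The sketch below is a hypothetical Python handler: the fleet-controller endpoints, payloads, and fulfilment route are invented for illustration and are not Medusa's real Admin API or CarphaCom's actual code.

```python
# Hypothetical order-to-robot bridge; all URLs and payloads are illustrative.
import requests

MEDUSA_URL = "https://example-store.com"      # assumed commerce backend URL
FLEET_URL = "http://fleet.local:8080"         # assumed warehouse fleet controller

def on_order_placed(event: dict) -> None:
    """Turn a storefront order event into a robot job, then report fulfilment back."""
    order_id = event["order_id"]
    items = event["items"]                    # e.g. [{"sku": "CHIPS-01", "quantity": 3}]

    # 1. Dispatch a pick -> pack -> courier job to the digital-twin warehouse.
    job = requests.post(f"{FLEET_URL}/jobs", json={
        "order_id": order_id,
        "steps": ["pick", "pack", "deliver_to_courier"],
        "items": items,
    }).json()

    # 2. In a real system, wait for the robot to report the physical task as done
    #    (omitted here for brevity).

    # 3. Mark the order fulfilled and attach the courier tracking number
    #    (endpoint shown is illustrative, not Medusa's actual Admin API).
    requests.post(
        f"{MEDUSA_URL}/admin/orders/{order_id}/fulfill",
        json={"tracking_number": job.get("tracking_number", "TBD")},
    )
```

The key design point is that the store emits the event and the robot closes the loop, so order status in the digital store always mirrors the physical state of the warehouse.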

RAKSHAK - Autonomy Evaluation Framework

RAKSHAK is an evaluation and benchmarking framework that validates autonomous robots before real-world deployment. As autonomous systems enter disaster zones, hospitals, warehouses, food and medicine delivery networks, agricultural pesticide spraying, and public infrastructure, failures are no longer minor bugs — they can result in injuries, recalls, lawsuits, and lost trust. Most autonomy failures occur in edge-case conditions not covered by standard testing. Field validation can cost $50K–$500K per failure iteration. RAKSHAK exposes these failures safely in simulation before deployment risk exists.

Built on top of Webots for real-time 3D simulation, RAKSHAK transforms simulation into adversarial validation infrastructure. Instead of testing robots under ideal conditions, it injects 50+ structured chaos scenarios including battery degradation, sensor blackouts, communication loss, environmental hazards, network latency, and multi-agent conflicts derived from real-world robotics failure modes. The platform integrates LLM-driven autonomy using the Gemini API and runs cloud-deployed simulations on Vultr infrastructure with WebSocket-based real-time telemetry. It performs live stress injection and generates a quantified Trust Score (0–100) across safety, resilience, efficiency, communication reliability, and task completion.

Example: A delivery drone carrying food or emergency medicine passes obstacle avoidance tests but crashes when battery drops below 20% during evasive maneuvers. RAKSHAK’s structured power-drop scenario exposes this weakness before first flight — preventing potential six-figure losses in hardware, liability, operational downtime, and public trust.

This is not just simulation. This is measurable deployment readiness. As autonomous systems scale globally, validation must scale with them. RAKSHAK ensures robots are trusted before they are deployed.
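
One way a quantified Trust Score like the one described above could be computed is as a weighted aggregate over the five named dimensions, averaged across chaos scenarios. The weights, schema, and numbers below are illustrative assumptions, not RAKSHAK's published scoring model.

```python
# Illustrative Trust Score aggregation; weights and metrics are assumptions.
from dataclasses import dataclass

WEIGHTS = {
    "safety": 0.30,
    "resilience": 0.25,
    "efficiency": 0.15,
    "communication": 0.15,
    "task_completion": 0.15,
}

@dataclass
class ScenarioResult:
    """Per-scenario metrics, each normalized to the range 0..1."""
    safety: float
    resilience: float
    efficiency: float
    communication: float
    task_completion: float

def trust_score(results: list[ScenarioResult]) -> float:
    """Average each dimension across chaos scenarios, then combine into a 0-100 score."""
    n = len(results)
    averages = {dim: sum(getattr(r, dim) for r in results) / n for dim in WEIGHTS}
    return 100 * sum(WEIGHTS[dim] * averages[dim] for dim in WEIGHTS)

# Example: two chaos runs (battery drop, sensor blackout); the second fails the task.
runs = [
    ScenarioResult(0.90, 0.70, 0.80, 0.95, 1.0),
    ScenarioResult(0.60, 0.50, 0.70, 0.90, 0.0),
]
print(f"Trust Score: {trust_score(runs):.1f}/100")
```

Aggregating per-scenario metrics this way keeps the score comparable across robots and chaos suites, while the per-dimension averages show where a platform is weakest.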