Top Builders

Explore the top contributors with the highest number of app submissions in our community.

NVIDIA

NVIDIA Corporation is a global leader in accelerated computing, specializing in the design of graphics processing units (GPUs) for the gaming, professional visualization, data center, and automotive markets. As a pioneer in parallel computing, NVIDIA has been instrumental in the advancement of artificial intelligence, providing the foundational hardware and software platforms that drive modern AI research and deployment.

General
  • Author: NVIDIA Corporation
  • Release Date: 1993
  • Website: https://www.nvidia.com/
  • Documentation: https://docs.nvidia.com/
  • Technology Type: Hardware / AI

Key Products and Technologies

  • GPUs (Graphics Processing Units): High-performance processors essential for parallel computing tasks in AI, machine learning, and deep learning.
  • CUDA Platform: A parallel computing platform and programming model that enables significant performance gains by harnessing the power of GPUs.
  • NVIDIA AI Software Suites: Comprehensive collections of tools and frameworks, such as NVIDIA NeMo for large language model development and deployment, and NVIDIA TensorRT for high-performance deep learning inference.
  • NVIDIA Jetson: Edge AI platform for autonomous machines, robotics, and embedded systems.
  • NVIDIA Omniverse: A platform for 3D design collaboration and simulation, facilitating the development of virtual worlds and digital twins.

Start Building with NVIDIA

NVIDIA's ecosystem of hardware and software is critical for accelerating AI development and deploying high-performance computing solutions. From data centers to edge devices, NVIDIA technology powers a vast array of AI applications, including agent lifecycle management with tools like NeMo. Developers are encouraged to explore the extensive documentation and resources available to leverage NVIDIA's capabilities for their projects.

  • NVIDIA Developer Program
  • NVIDIA AI Platform Overview

NVIDIA AI Technologies Hackathon projects

Discover innovative solutions crafted with NVIDIA AI Technologies, developed by our community members during our engaging hackathons.

ARIA - Autonomous Report Intelligence Analyst

Activity reports contain valuable information. Extracting it, connecting the dots across sources, and turning raw data into decisions takes time most teams don't have. ARIA was built to do exactly that.

ARIA is an AI agent specialized in activity report analysis. Its role is not to generate reports — it is to read them, understand them, and tell you what they mean. Submit your existing reports in any format (CSV, Excel, PDF, JSON, databases, APIs) and ARIA identifies the business domain, locates the relevant KPIs, cross-validates data across sources, and produces structured insights grounded in your actual data.

What sets ARIA apart

ARIA adapts to your domain automatically — HR, finance, R&D, logistics, IT — calibrating its KPIs and analysis angle without configuration. When it encounters a file format it cannot handle, it builds the missing extraction tool itself. When it lacks domain knowledge, it enriches its own context before proceeding. Its analytical engine applies TRIZ methodology to go beyond trends: it identifies structural contradictions in your data, derives root causes, and produces prioritized recommendations with an explicit owner, timeline, and priority level.

Results are delivered with charts and visualizations generated directly from your reports, exportable in JSON, Markdown, HTML, PDF, and PowerPoint. ARIA never fills a data gap with an assumption. Every finding is traceable, and every confidence score is explicit. ARIA does not write your reports. It finally makes them worth reading.
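The "explicit owner, timeline, priority, and confidence" contract described above can be sketched as a small data structure. This is an illustrative sketch, not ARIA's actual schema: the `Finding` class, its field names, and the validation rule are assumptions chosen to show how a finding can be forced to stay traceable rather than filled in with guesses.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One analysis result: traceable to its source, explicit about confidence.

    All names here are hypothetical; ARIA's real output format is not public.
    """
    summary: str
    source: str        # file or API the finding was derived from
    owner: str         # who should act on the recommendation
    timeline: str      # e.g. "2 weeks"
    priority: int      # 1 = highest
    confidence: float  # explicit score in [0.0, 1.0], never omitted

    def validate(self) -> bool:
        # Reject findings that would require filling a data gap with an
        # assumption: every field populated, confidence a real score.
        return bool(self.summary and self.source and self.owner
                    and self.timeline) and 0.0 <= self.confidence <= 1.0

findings = [
    Finding("Overtime up 40% in logistics", "hr_q2.csv", "Ops lead", "2 weeks", 1, 0.92),
    Finding("Possible budget overrun", "finance.xlsx", "", "", 2, 0.55),  # incomplete
]

# Only fully traceable findings survive; incomplete ones are flagged, not guessed at.
valid = [f for f in findings if f.validate()]
```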

GeminiFleet

What it does

GeminiFleet runs a physics-based warehouse simulation where autonomous robots pick up and deliver items. A fleet manager controls robot behavior through natural language — no code, no config files.

Example commands:

  • "Make robots more cautious" → speed drops, safety margins increase
  • "Speed things up, we're behind schedule" → max speed, tighter margins
  • "Focus on the north side" → robots prioritize north-zone tasks

Google Gemini interprets each command with full context (fleet status, delivery counts, collision stats) and generates precise parameter updates that modify robot behavior in real time.

How it works

  • PyBullet Physics Engine — Real rigid-body simulation with collision detection. Warehouse environment with walls, shelves, pickup/dropoff zones, and 4-6 autonomous robots navigating with priority-based collision avoidance.
  • Gemini 2.0 Flash Policy Engine — Translates natural language into 7 tunable parameters: speed, safety margin, congestion response, task selection strategy, cooperation mode, zone preference, and concurrency. Values are clamped to safe ranges.
  • Live Web Dashboard — Real-time 2D visualization via WebSocket at 10 Hz. Tracks robot positions, planned paths, carrying status, and delivery statistics. Collapsible panels for robot status and active policy display.

Key Innovation

Robot fleet behavior is parameterized into meaningful dimensions that an LLM can reliably map from ambiguous human instructions. Operational expertise — not programming skill — drives fleet optimization.

Deployment

Runs entirely on Vultr non-GPU VMs via Docker. PyBullet operates in CPU-only mode. A single `docker compose up` deploys the full simulation, dashboard, and Gemini chat.

Built with

  • PyBullet — Bullet Physics simulation
  • Google Gemini 2.0 Flash — NL→policy translation
  • FastAPI + WebSocket — Real-time state streaming
  • Docker — Vultr deployment
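The "values are clamped to safe ranges" step is the part that makes LLM-driven control safe, and it can be sketched in a few lines. The parameter names and bounds below are assumptions for illustration, not GeminiFleet's actual schema: the idea is simply that whatever numbers the model proposes, only known parameters survive and each is forced into its allowed interval.

```python
# Minimal sketch of clamping LLM-proposed fleet parameters to safe ranges.
# Parameter names and bounds are illustrative, not GeminiFleet's real schema.
SAFE_RANGES = {
    "speed": (0.2, 2.0),          # m/s
    "safety_margin": (0.3, 1.5),  # metres kept clear of other robots
    "congestion_response": (0.0, 1.0),
}

def clamp_policy(proposed: dict) -> dict:
    """Keep only known parameters and clamp each value into its safe range."""
    clamped = {}
    for name, (lo, hi) in SAFE_RANGES.items():
        if name in proposed:
            clamped[name] = min(max(float(proposed[name]), lo), hi)
    return clamped

# "Speed things up, we're behind schedule" might yield an out-of-range proposal:
policy = clamp_policy({"speed": 5.0, "safety_margin": 0.1, "turbo": True})
# → {"speed": 2.0, "safety_margin": 0.3}; the unknown "turbo" key is dropped
```

Clamping after generation, rather than trusting the model to respect bounds, means even an adversarial or confused instruction can only move robots within pre-approved limits.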

Tattle Turtle

A second grader gets pushed at recess. She doesn't tell her teacher — she's embarrassed. She doesn't tell her parents — she doesn't want to worry them. By the time an adult notices, it's been three weeks. This happens in every school, every day. Tattle Turtle exists so no kid carries that alone.

Tammy the Tattle Turtle is an AI emotional support companion running on a simulated Reachy Mini robot. Students walk up and talk to Tammy through voice. She listens, validates, and asks one gentle question at a time — max 15 words, non-leading language, strict boundaries between emotional triage and treatment.

What makes Tattle Turtle different is what happens beneath the conversation. Every exchange is classified in real time into GREEN, YELLOW, or RED urgency. A bad-grade vent stays GREEN — private. Recess exclusion mentioned three times this week? YELLOW — a pattern surfaces on the teacher dashboard that no human could track across 25 students. A student mentions being hit? Immediate RED alert — timestamp, summary, and next steps pushed to the teacher. The system comes to them when it matters.

We built this on three sponsor technologies. Google DeepMind's Gemini API powers the conversational engine with structured JSON for severity and emotion tags. Reachy Mini's SDK provides robot simulation through MuJoCo with expressive head movements and audio I/O. Hugging Face Spaces serves as the deployment layer — one-click installable on any Reachy Mini in any classroom. Tammy's prompt engineering uses a layered 5-step framework ensuring she never crosses clinical boundaries, never suggests emotions to students, and never stores identifiable data. Privacy isn't a feature — it's a constraint baked into every layer.

Tattle Turtle fills the gap between a child's worst moment and an adult's awareness. One robot. Every classroom. No kid left unheard.
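The GREEN/YELLOW/RED triage layer described above can be sketched as a small routing function. This is a hypothetical sketch, not Tattle Turtle's actual code: the JSON field names (`severity`, `topic`), the actions, and the three-mention threshold are assumptions, standing in for the real structured output the Gemini API is prompted to return.

```python
import json
from collections import Counter

def triage(model_json: str, history: Counter) -> str:
    """Map one classified exchange to an action, tracking YELLOW patterns over time.

    Field names and the escalation threshold are illustrative assumptions.
    """
    tags = json.loads(model_json)        # structured severity/emotion tags from the model
    severity = tags["severity"]
    if severity == "RED":
        return "alert_teacher_now"       # timestamp + summary pushed immediately
    if severity == "YELLOW":
        history[tags["topic"]] += 1
        if history[tags["topic"]] >= 3:  # same worry surfacing repeatedly
            return "surface_on_dashboard"
        return "log_privately"
    return "stay_private"                # GREEN: no adult ever sees it

# The same topic raised three times crosses the pattern threshold:
history = Counter()
actions = [
    triage(json.dumps({"severity": "YELLOW", "topic": "recess_exclusion"}), history)
    for _ in range(3)
]
```

Keeping the urgency decision in deterministic code, with the model supplying only the tags, is what lets the privacy guarantee hold: GREEN conversations never leave the local loop regardless of what the model says next.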