Top Builders

Explore the top contributors in our community, ranked by number of app submissions.

Google DeepMind

Google DeepMind is a world-renowned artificial intelligence research laboratory, formed in 2023 from the merger of DeepMind and Google Brain. It stands at the forefront of AI innovation, responsible for groundbreaking advancements including the Gemini series of multimodal AI models and the Gemma open-model family. DeepMind's mission is to solve intelligence to advance science and benefit humanity.

General
Author: Google DeepMind
Release Date: 2010 (DeepMind founding)
Website: https://deepmind.google/
Technology Type: AI Research Organization

Key Research Areas and Achievements

  • Reinforcement Learning: Pioneering work in reinforcement learning, including AlphaGo, which defeated world champions in Go.
  • Large Language Models: Development of advanced LLMs, contributing to the Gemini and Gemma model families.
  • Scientific Discovery: Application of AI to accelerate scientific research, such as AlphaFold for protein structure prediction.
  • Safety and Ethics: Dedicated research into AI safety, ethics, and responsible deployment.

Start Exploring Google DeepMind

Google DeepMind's research underpins many of the most advanced AI systems in the world. As the organization behind foundational models like Gemini and Gemma, its work is crucial for understanding the future of AI. Developers and researchers can delve into their publications and open-source contributions to gain insights into cutting-edge AI development.

👉 DeepMind Research Publications
👉 About Google DeepMind

Google DeepMind AI Technology Hackathon Projects

Discover innovative solutions crafted with Google DeepMind AI technology, developed by our community members during our hackathons.

Tattle Turtle

"A second grader gets pushed at recess. She doesn't tell her teacher — she's embarrassed. She doesn't tell her parents — she doesn't want to worry them. By the time an adult notices, it's been three weeks. This happens in every school, every day. Tattle Turtle exists so no kid carries that alone.

Tammy the Tattle Turtle is an AI emotional support companion running on a simulated Reachy Mini robot. Students walk up and talk to Tammy through voice. She listens, validates, and asks one gentle question at a time — max 15 words, non-leading language, strict boundaries between emotional triage and treatment.

What makes Tattle Turtle different is what happens beneath the conversation. Every exchange is classified in real time into GREEN, YELLOW, or RED urgency. A bad grade vent stays GREEN — private. Recess exclusion mentioned three times this week? YELLOW — a pattern surfaces on the teacher dashboard that no human could track across 25 students. A student mentions being hit? Immediate RED alert — timestamp, summary, and next steps pushed to the teacher. The system comes to them when it matters.

We built this on three sponsor technologies. Google DeepMind's Gemini API powers the conversational engine with structured JSON for severity and emotion tags. Reachy Mini's SDK provides robot simulation through MuJoCo with expressive head movements and audio I/O. Hugging Face Spaces serves as the deployment layer — one-click installable on any Reachy Mini in any classroom.

Tammy's prompt engineering uses a layered 5-step framework ensuring she never crosses clinical boundaries, never suggests emotions to students, and never stores identifiable data. Privacy isn't a feature — it's a constraint baked into every layer.

Tattle Turtle fills the gap between a child's worst moment and an adult's awareness. One robot. Every classroom. No kid left unheard."
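The GREEN/YELLOW/RED triage described above can be sketched as a small routing layer that sits downstream of the model call. The sketch below is a simplified illustration, assuming the Gemini call has already returned a structured severity tag; the class names, threshold value, and return strings are all hypothetical, not taken from the project's code:

```python
from collections import defaultdict
from dataclasses import dataclass

# Assumed threshold: a YELLOW pattern surfaces after three mentions.
YELLOW_PATTERN_THRESHOLD = 3

@dataclass
class Exchange:
    student_id: str
    severity: str   # "GREEN", "YELLOW", or "RED", as tagged by the LLM
    summary: str

class TriageRouter:
    """Routes each classified exchange: GREEN stays private, repeated
    YELLOWs surface as a pattern, RED alerts the teacher immediately."""

    def __init__(self) -> None:
        self.yellow_counts: dict[str, int] = defaultdict(int)

    def route(self, ex: Exchange) -> str:
        if ex.severity == "RED":
            return f"ALERT: {ex.summary}"          # pushed to teacher now
        if ex.severity == "YELLOW":
            self.yellow_counts[ex.student_id] += 1
            if self.yellow_counts[ex.student_id] >= YELLOW_PATTERN_THRESHOLD:
                return f"PATTERN: {ex.summary}"    # surfaced on dashboard
            return "LOGGED"                        # tracked, not yet surfaced
        return "PRIVATE"                           # GREEN never leaves the session

router = TriageRouter()
print(router.route(Exchange("s1", "GREEN", "bad grade vent")))       # PRIVATE
print(router.route(Exchange("s1", "YELLOW", "left out at recess")))  # LOGGED
print(router.route(Exchange("s1", "YELLOW", "left out at recess")))  # LOGGED
print(router.route(Exchange("s1", "YELLOW", "left out at recess")))  # PATTERN: ...
```

Keeping the routing deterministic and outside the model means the escalation rules are auditable, which matters when the output reaches a teacher dashboard.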

ClutterBot

ClutterBot is a proof-of-concept simulation platform that bridges natural language understanding and robotic task execution for household cleanup tasks. Users issue commands like "pick up the phone and the toy train," which Gemini 3 Flash parses into structured task lists. The system generates complete execution plans upfront, with Gemini deciding the sequence of pick-and-place operations for each object.

The architecture combines a FastAPI backend hosted on Vultr (the central system of record), a Next.js frontend for real-time monitoring, and a MuJoCo physics simulation featuring a Franka FR3 manipulator in a room environment with everyday objects. The robot executes inverse kinematics motions to relocate objects from scattered positions on a table to a collection bin, with each action streamed via WebSocket for live visualization.

This prototype validates the feasibility of integrating large language models with robotic simulation pipelines, demonstrating how AI can translate high-level human intent into executable robot behaviors. While the current implementation uses deterministic motion planning with hardcoded inverse kinematics rather than learned policies, the framework establishes foundational patterns for future work incorporating adaptive control, real hardware integration, and expanded object manipulation capabilities. The plan-first approach — Gemini generates the full task plan in a single API call — shows AI reasoning while keeping execution fast and deterministic, making it suitable for real-time interactive use.
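The plan-first pattern — one LLM call returning the full task list, then deterministic execution — can be illustrated with a short sketch. The JSON payload and field names below are hypothetical stand-ins for whatever schema the project actually uses; the point is validating the whole plan up front before any motion starts:

```python
import json
from dataclasses import dataclass

# Hypothetical plan, as a single LLM call might return it for
# "pick up the phone and the toy train".
PLAN_JSON = """
{
  "tasks": [
    {"object": "phone",     "action": "pick_and_place", "target": "bin"},
    {"object": "toy_train", "action": "pick_and_place", "target": "bin"}
  ]
}
"""

@dataclass
class PickPlace:
    obj: str
    target: str

def parse_plan(raw: str) -> list[PickPlace]:
    """Validate the LLM's plan once, before execution begins."""
    tasks = []
    for t in json.loads(raw)["tasks"]:
        if t["action"] != "pick_and_place":
            raise ValueError(f"unsupported action: {t['action']}")
        tasks.append(PickPlace(obj=t["object"], target=t["target"]))
    return tasks

def execute(tasks: list[PickPlace]) -> list[str]:
    """Deterministic loop; in the real system each step would drive
    IK motions and stream progress over WebSocket."""
    return [f"move {t.obj} -> {t.target}" for t in tasks]

print(execute(parse_plan(PLAN_JSON)))
# ['move phone -> bin', 'move toy_train -> bin']
```

Rejecting malformed plans before the robot moves is what keeps the single-API-call design safe: the LLM reasons once, and everything after that is ordinary, debuggable code.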

AdaptiFleet

Traditional warehouse automation has improved efficiency, yet many systems remain rigid, expensive, and difficult to adapt when workflows or layouts change. Even small adjustments often require specialized expertise or time-consuming reprogramming. This creates a disconnect between what operators need robots to do and how easily they can communicate those needs — a challenge we call the “Human Intent Gap.”

AdaptiFleet was designed to close this gap by enabling intuitive, AI-driven fleet control. Instead of relying on complex interfaces or predefined scripts, users interact with autonomous robots using natural language. Commands such as “Get me three bags of chips and a cold drink” are interpreted and translated into structured robotic tasks automatically.

At its core, AdaptiFleet leverages Gemini-powered Vision Language Models (VLMs) to understand user intent and visual context. Robots operate within a dynamic decision framework, allowing them to adapt to changing environments rather than follow rigid, pre-programmed routes. The platform integrates a digital twin simulation stack built on Isaac Sim, enabling teams to validate behaviors, test workflows, and optimize multi-robot coordination before live deployment.

Once deployed, ROS 2 and Nav2 provide robust navigation, dynamic path planning, and collision avoidance. The VLM orchestration layer continuously analyzes visual inputs to support scene understanding, anomaly detection, and proactive hazard awareness. When conditions change, AdaptiFleet autonomously re-plans routes and tasks, reducing downtime and operational disruption.

By combining conversational interaction, real autonomy, and simulation-driven validation, AdaptiFleet simplifies robotic deployments while improving efficiency and visibility. The result is an automation system that is adaptive, scalable, and aligned with how people naturally work.
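The re-planning behavior described above is handled in practice by Nav2's planners; as a miniature illustration of the idea, the grid-world sketch below re-plans a route when a new obstacle invalidates the current one. The grid, the breadth-first planner, and the obstacle update are all simplified stand-ins, not AdaptiFleet's actual navigation stack:

```python
from collections import deque

def plan(grid, start, goal):
    """Breadth-first search over free cells (0 = free, 1 = occupied).
    Returns the shortest path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # goal unreachable

grid = [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]]
route = plan(grid, (0, 0), (2, 2))

# A pallet appears mid-route: mark the cell occupied and re-plan.
grid[1][1] = 1
new_route = plan(grid, (0, 0), (2, 2))
print(new_route)  # a same-length detour that avoids (1, 1)
```

The production version swaps the toy grid for a costmap and BFS for Nav2's planners, but the control loop is the same shape: sense a change, update the map, re-plan, continue.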