
Claude Code

Claude Code is an advanced command-line interface (CLI) tool developed by Anthropic, designed to empower its AI model, Claude, with direct code interaction capabilities. This tool allows developers to leverage Claude for agentic coding tasks, including refactoring, debugging, and managing code within the terminal environment. It integrates Claude's powerful language understanding with practical development workflows, bringing AI assistance directly to the codebase.

General
Author: Anthropic
Release Date: 2025
Website: https://code.claude.com/
Documentation: https://code.claude.com/docs/en/overview
Technology Type: AI Coding Assistant

Key Features

  • Agentic Coding: Enables Claude to perform complex coding tasks autonomously, guided by natural language instructions.
  • Terminal Integration: Works directly within the command line, providing a seamless experience for developers.
  • Code Refactoring: Assists in improving code quality, structure, and efficiency.
  • Debugging Support: Helps identify and resolve issues in the codebase.
  • Code Management: Facilitates various code-related operations, enhancing developer productivity.
  • Natural Language Interaction: Developers can interact with Claude using plain language prompts for coding tasks.

Start Building with Claude Code

Claude Code offers a powerful way to integrate Anthropic's Claude AI directly into your coding workflow. By providing agentic capabilities from the terminal, it streamlines refactoring, debugging, and general code management. Developers can use it to accelerate development, improve code quality, and benefit from AI assistance in real time.

👉 Claude Code CLI Guide 👉 Claude Code Quickstart

Anthropic Claude Code AI Technology Hackathon Projects

Discover innovative solutions built with Anthropic Claude Code AI technology by community members during our hackathons.

WritenDraw Flight Simulator for Software Developer

WritenDraw is an agentic AI simulation platform that puts junior developers through realistic production incidents to bridge the gap between learning to code and working in a real team.

The core innovation is the agentic workflow: Google Gemini 2.0 Flash orchestrates the entire simulation through three autonomous agents.

  • Agentic evaluation: Every step requires free-text responses (no multiple choice). Gemini evaluates each response against per-step rubrics, scoring reasoning 0-15, and adapts its feedback based on accumulated performance.
  • Agentic mentoring: The AI mentor maintains persistent context, tracking understanding level, chat count, and time pressure. Early messages are patient and ask "what do you think?"; by message 7+ it says "just write it up." The agent autonomously decides how much help to give.
  • Agentic audit: The system logs every response, chat message, code submission, and score, creating a complete picture of how a developer thinks through a crisis. The AI continuously assesses and adapts.

The simulation drops you into a P1 incident at ShopRight (a fictional UK supermarket). You join a standup, read a Jira ticket, investigate messy code with no hints, chat with the AI mentor, write a fix, respond to code review, create a deployment plan, and contribute to a retro. Paste is disabled. The key insight: explanation scores higher than code (10 vs 5 points), so wrong code with a great explanation beats perfect code with no explanation, because in real teams communication matters as much as code.

The project builds on the author's published research, "TrueSkills: AI-Resistant Assessment Through Personalized Understanding Validation" (SSRN, 2025, DOI: 10.2139/ssrn.5674130), which demonstrated that AI-resistant assessment requires evaluating understanding rather than recall. WritenDraw takes this further by testing how developers think under realistic production pressure. Built with Python/Flask, Google Gemini 2.0 Flash, CodeMirror, Pyodide, and Docker.
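No source code is included in the description; purely as an illustration of the scoring rule it mentions, here is a minimal Python sketch in which the explanation rubric (up to 10 points) outweighs the code itself (up to 5), so reasoning dominates the 0-15 step score. The class and function names are hypothetical, not taken from the WritenDraw codebase.

```python
from dataclasses import dataclass

@dataclass
class StepResult:
    """Outcome of one simulation step, scored against a per-step rubric (hypothetical shape)."""
    explanation_points: int  # 0-10: quality of the free-text reasoning
    code_points: int         # 0-5: correctness of the submitted fix

def score_step(result: StepResult) -> int:
    """Combine rubric scores, mirroring the 'explanation > code' weighting (max 15)."""
    explanation = max(0, min(result.explanation_points, 10))
    code = max(0, min(result.code_points, 5))
    return explanation + code

# Wrong code with a strong explanation outscores perfect code with no explanation.
print(score_step(StepResult(explanation_points=9, code_points=1)))  # 10
print(score_step(StepResult(explanation_points=0, code_points=5)))  # 5
```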

OpsTwin AI

OpsTwin AI is a simulation-first autonomous warehouse control system designed to model and optimize multi-robot fulfillment operations. As warehouses adopt robotics at scale, fleet coordination becomes increasingly complex: congestion, battery constraints, task prioritization, and workload balancing all impact throughput and efficiency. OpsTwin AI addresses this by creating a digital twin of warehouse operations, a live simulation where robotic workflows can be orchestrated, tested, and optimized before real-world deployment.

In OpsTwin AI, robots operate within a simulated warehouse grid containing storage racks, charging stations, and pack zones. When a new order is created, the system autonomously determines which robot should fulfill it. Instead of relying on hardcoded rules, I use Gemini as a strategic planning layer. The backend sends the live fleet state, including robot positions, battery levels, and active tasks, to Gemini, which returns structured JSON with a selected robot and a step-by-step task sequence. This allows deterministic execution while enabling adaptive, multi-factor decision making.

The Vultr-hosted backend serves as the centralized system of record. It maintains robot state, order queues, and operational metrics, and broadcasts real-time updates to a web dashboard using WebSockets. A 500-millisecond simulation loop executes plans, updates robot movement, tracks congestion events, and manages battery-aware charging. The result is fully autonomous multi-robot operation without manual intervention.

From a business perspective, OpsTwin AI functions as an operational control tower for robotic fleets, enabling teams to simulate workflows, evaluate performance, and reduce deployment risk before scaling to physical infrastructure. By separating AI planning from deterministic execution, the architecture mirrors real-world robotics systems and provides a clear path from simulation to real-world deployment.
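As a hedged illustration of the planning/execution split described above, here is a minimal Python sketch: the backend asks the LLM planner for a plan as structured JSON, validates it, and only then hands it to the deterministic executor. The `call_gemini` helper, the JSON fields (`robot_id`, `steps`, `action`), and the allowed action set are hypothetical stand-ins, not OpsTwin AI's actual schema.

```python
import json

def call_gemini(prompt: str) -> str:
    """Placeholder for the real LLM call; returns the model's raw text response."""
    raise NotImplementedError("wire up the Gemini client here")

def plan_order(order: dict, fleet_state: dict) -> dict:
    """Ask the LLM planner for a robot assignment and task sequence, then validate it."""
    prompt = (
        "Select one robot and a step-by-step task plan for this order.\n"
        f"Order: {json.dumps(order)}\n"
        f"Fleet state: {json.dumps(fleet_state)}\n"
        'Reply with JSON: {"robot_id": ..., "steps": [{"action": ...}, ...]}'
    )
    plan = json.loads(call_gemini(prompt))

    # Deterministic guardrails: reject plans that reference unknown robots or
    # unsupported step types before they ever reach the simulation loop.
    # (Assumes fleet_state["robots"] maps robot ids to their current state.)
    if plan["robot_id"] not in fleet_state["robots"]:
        raise ValueError("planner selected an unknown robot")
    allowed = {"move", "pick", "drop", "charge"}
    if any(step["action"] not in allowed for step in plan["steps"]):
        raise ValueError("planner produced an unsupported action")
    return plan
```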

DroneOS - AI-Powered Autonomous Fleet Dispatch

DroneOS is an autonomous drone control framework built on PX4 Autopilot and ROS2. At its core is drone_core, a custom C++ SDK that exposes high-level flight control as ROS2 services: arm, takeoff, position commands, and land.

An OpenClaw AI agent runs on a Vultr VPS and acts as the fleet dispatcher. When emergency incidents come in through the dispatch service, they're routed to the agent via a bridge over WebSocket. The agent evaluates incident priority, checks drone availability and location, then sends flight commands through ROS2 to dispatch drones autonomously.

The architecture is two servers connected over a Tailscale VPN. The Vultr VPS runs the OpenClaw gateway, dispatch service, communication bridge, and React frontend. A separate simulation server runs PX4 SITL with Gazebo, dual drone_core nodes, rosbridge, and camera feeds. This is the same split you'd have in production, with a cloud command center talking to drones over VPN, except the drones are simulated.

The frontend is a real-time dashboard connected to rosbridge over WebSocket. It shows the incident queue with priority levels, a map with drone positions, live camera feeds from both drones with a picture-in-picture toggle, and an AI activity log showing every decision the agent makes. Operators see what the AI is doing and can override it with natural language commands through the same OpenClaw agent.

The dispatch service simulates a 911 CAD system generating incidents (medical emergencies, fires, property damage), each with priority levels and coordinates. The AI doesn't follow scripts; it decides which drone to send based on priority, proximity, and availability.

The framework supports real hardware. Production Docker configs exist for Raspberry Pi companion computers communicating with Pixhawk flight controllers over serial. The simulation runs the same software stack.

Live demo: http://207.148.9.142:3000
Source: https://github.com/ortegarod/drone-os
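The linked repository contains the actual implementation; the Python sketch below only illustrates the selection logic the write-up describes, choosing among available drones by proximity once incidents are handled in priority order. The `Drone` fields, the battery threshold, and the function names are assumptions for illustration, not part of drone_core or the OpenClaw agent.

```python
import math
from dataclasses import dataclass

@dataclass
class Drone:
    name: str
    lat: float
    lon: float
    available: bool
    battery: float  # 0.0-1.0 state of charge

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance; good enough for ranking nearby drones."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pick_drone(drones, incident_lat, incident_lon, min_battery=0.3):
    """For the current highest-priority incident, pick the closest available
    drone with enough battery, or None if the whole fleet is busy/low."""
    candidates = [d for d in drones if d.available and d.battery >= min_battery]
    if not candidates:
        return None
    return min(candidates, key=lambda d: distance_km(d.lat, d.lon, incident_lat, incident_lon))

fleet = [Drone("drone_1", 51.507, -0.128, True, 0.8),
         Drone("drone_2", 51.515, -0.141, True, 0.2)]
print(pick_drone(fleet, 51.520, -0.135))  # drone_1: drone_2 is below the battery threshold
```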

RoboGripAI

This project presents a simulation-first robotic system designed to perform structured physical tasks such as pick-and-place, sorting, and simple assembly through reliable interaction with objects and its environment. The system focuses on practical task execution rather than complex physics modeling, ensuring repeatability, robustness, and measurable performance across varied simulated conditions.

A key emphasis is reliability under dynamic conditions. The simulation introduces variations such as object position changes, minor environmental disturbances, and task sequence modifications, and the robot is designed to adapt to these variations while maintaining consistent task success rates. Basic failure handling mechanisms are implemented, including reattempt strategies for failed grasps, collision avoidance corrections, and task state recovery protocols.

The framework incorporates structured task sequencing and state-based control logic to ensure deterministic and repeatable behavior. Performance is evaluated using clear metrics such as task completion rate, execution time, grasp accuracy, recovery success rate, and system stability across multiple trials. The modular design allows scalability to additional tasks and integration with advanced planning algorithms.

By prioritizing repeatability, robustness, and measurable outcomes, the solution demonstrates practical robotic task automation in a controlled simulated environment, aligning with real-world industrial and research use cases. Overall, the project showcases a dependable robotic manipulation framework that bridges perception, decision-making, and action in a simulation-first setting, delivering consistent, benchmark-driven task execution.
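No source accompanies the description, so the following Python sketch only illustrates the kind of state-based control with grasp reattempts and simple success metrics the write-up mentions. The states, the `try_grasp` stand-in, and the retry budget are illustrative assumptions rather than RoboGripAI's actual implementation.

```python
import random

MAX_GRASP_ATTEMPTS = 3  # assumed reattempt budget per object

def try_grasp(obj) -> bool:
    """Stand-in for the simulator's grasp primitive; returns True on success."""
    return random.random() > 0.3

def pick_and_place(obj) -> dict:
    """Simple state machine: APPROACH -> GRASP (with reattempts) -> PLACE."""
    state, attempts = "APPROACH", 0
    while True:
        if state == "APPROACH":
            # Move the gripper above the object's current (possibly shifted) pose.
            state = "GRASP"
        elif state == "GRASP":
            attempts += 1
            if try_grasp(obj):
                state = "PLACE"
            elif attempts >= MAX_GRASP_ATTEMPTS:
                return {"object": obj, "success": False, "attempts": attempts}
            # Otherwise stay in GRASP and reattempt on the next loop iteration.
        elif state == "PLACE":
            return {"object": obj, "success": True, "attempts": attempts}

# Benchmark-style evaluation across multiple trials.
results = [pick_and_place(f"part_{i}") for i in range(10)]
success_rate = sum(r["success"] for r in results) / len(results)
print(f"task completion rate: {success_rate:.0%}")
```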