Top Builders

Explore the top contributors with the highest number of app submissions in our community.

Claude Code

Claude Code is an advanced command-line interface (CLI) tool developed by Anthropic, designed to empower its AI model, Claude, with direct code interaction capabilities. This tool allows developers to leverage Claude for agentic coding tasks, including refactoring, debugging, and managing code within the terminal environment. It integrates Claude's powerful language understanding with practical development workflows, bringing AI assistance directly to the codebase.

General

  • Author: Anthropic
  • Release Date: 2025
  • Website: https://code.claude.com/
  • Documentation: https://code.claude.com/docs/en/overview
  • Technology Type: AI Coding Assistant

Key Features

  • Agentic Coding: Enables Claude to perform complex coding tasks autonomously, guided by natural language instructions.
  • Terminal Integration: Works directly within the command line, providing a seamless experience for developers.
  • Code Refactoring: Assists in improving code quality, structure, and efficiency.
  • Debugging Support: Helps identify and resolve issues in the codebase.
  • Code Management: Facilitates various code-related operations, enhancing developer productivity.
  • Natural Language Interaction: Developers can interact with Claude using plain language prompts for coding tasks.

Start Building with Claude Code

Claude Code offers a powerful way to integrate Anthropic's Claude AI directly into your coding workflow. By providing agentic capabilities from the terminal, it streamlines refactoring, debugging, and general code management. Developers can leverage this tool to accelerate development, improve code quality, and benefit from AI assistance in real-time.

👉 Claude Code CLI Guide 👉 Claude Code Quickstart

Anthropic Claude Code AI Technology Hackathon Projects

Discover innovative solutions crafted with Anthropic Claude Code AI technology, developed by our community members during our engaging hackathons.

Thor v2 — RAG-Free Fitness Intelligence

Thor v2 is a domain-expert fitness AI built on a single fine-tuned Qwen3-8B model trained on 7,118 carefully constructed instruction-response pairs spanning exercise science, nutrition, programming, injury screening, and population-specific guidance. Unlike RAG-based fitness apps that retrieve documents at query time, Thor v2 encodes knowledge directly into model weights during supervised fine-tuning on AMD MI300X hardware using ROCm.

Evidence is referenced through compact citation keys, e.g. [CITE:NSCA_HYPERTROPHY_VOLUME], that the model emits inline. A lightweight citation resolver validates these keys against a locked registry and surfaces the source document on demand. If the model emits an unknown key, it is rejected at runtime, so hallucinated citations are structurally impossible.

The dataset covers 113 unique citation keys from 9 authoritative organisations (NSCA, ACSM, ISSN, NASM, HHS, USDA, NIH, CDC, and ExRx), with 80 exercise technique entries and 14 population profiles including senior, postpartum, teen, vegan, rehab return, and competitive athlete. Six conversational style variants (casual, research-nerd, anxious, skeptical, verbose, follow-up-first) are baked into training so the model adapts tone naturally without prompt engineering.

Training results: 100% JSON contract pass rate across all eval prompts. Coach gating behavior was confirmed: the model asks clarifying questions before prescribing when context is missing, rather than giving generic advice. All responses emit valid citation_keys, follow_up_questions, and safety_notes fields. Adapter size: <350 MB on top of a frozen 8B base. Built entirely on AMD MI300X (192 GB HBM3, ROCm 6.3) using HuggingFace PEFT + TRL.

One model. No retrieval. No vector database. The model knows. The resolver proves.
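
To make the locked-registry idea concrete, here is a minimal Python sketch of how such a resolver could work. The registry contents, function names, and everything beyond the [CITE:...] pattern quoted above are assumptions for illustration, not Thor v2's actual code.

```python
import re

# Illustrative locked registry (hypothetical entries): fixed before inference,
# so any key the model emits that is not listed here can only be rejected.
CITATION_REGISTRY = {
    "NSCA_HYPERTROPHY_VOLUME": "NSCA position stand on resistance training volume",
    "ACSM_AEROBIC_GUIDELINES": "ACSM guidelines for aerobic exercise prescription",
    # ... the real dataset covers 113 keys from 9 organisations
}

CITE_PATTERN = re.compile(r"\[CITE:([A-Z0-9_]+)\]")

def resolve_citations(model_output: str) -> list[str]:
    """Return the source titles for every citation key in the model output.

    Raises ValueError if the model emitted a key that is not in the locked
    registry, so an invented citation is rejected instead of being shown.
    """
    sources = []
    for key in CITE_PATTERN.findall(model_output):
        if key not in CITATION_REGISTRY:
            raise ValueError(f"Unknown citation key rejected at runtime: {key}")
        sources.append(CITATION_REGISTRY[key])
    return sources
```

Because the registry is fixed before inference, a fabricated key can only ever produce a rejection, never a plausible-looking source.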

Hands for AI

Chrome offers an unusually deep programmatic surface to AI agents: the DevTools Protocol, the accessibility tree, JavaScript evaluation, network interception, and multi-tab, multi-context isolation. For most of the web, that surface is enough to skip visual reasoning entirely from the first action, replacing screenshot-and-guess with structured targeting that survives DOM changes. This project drives all of it through a single MCP server, with accessibility references and a six-rung escalation ladder that auto-selects the targeting strategy. Vision and OCR remain available as fallbacks for canvas apps, custom-drawn UIs, and anything without useful structure.

What makes the architecture more than a Chrome controller is the workflow layer alongside it. While the agent works, network traffic is captured and the underlying API patterns are extracted into replayable flows. On subsequent runs the agent skips the browser entirely and executes direct HTTP calls, giving millisecond execution at a fraction of the inference cost. Credentials and TOTP seeds live in an OS-level vault, so replay works across sessions and machines without secrets ever touching chat context. An optional personal-assistant mode extends the same engine beyond Chrome to native Windows applications via UI Automation, with OCR-based control of anything else, for workflows where browser scope isn't enough.

Most agentic browser tooling today is either tied to a vendor's cloud or wraps a single automation library. This stack stays fully open, MCP-native, and self-hostable, running over local STDIO, which removes the network hop between agent and tool at every step and meaningfully cuts step latency on multi-action tasks. It is composable with any compatible host and any other MCP tool. The reasoning layer runs on AMD Developer Cloud with ROCm-hosted open vision and language models, with no proprietary inference dependencies anywhere in the stack.
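
As a rough illustration of the escalation-ladder idea, the Python sketch below walks an ordered list of targeting strategies and stops at the first one that resolves an element. The type names, ordering, and signatures are hypothetical; the actual project defines its own six rungs over the Chrome DevTools Protocol and the accessibility tree, with vision and OCR as the final fallbacks.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical rung: takes a natural-language element description and returns
# a handle (for example, a node id) or None if the strategy cannot find it.
Rung = Callable[[str], Optional[str]]

@dataclass
class Target:
    handle: str  # identifier produced by the winning rung
    rung: str    # which strategy found it, useful for logging

def find_target(description: str, ladder: list[tuple[str, Rung]]) -> Target:
    """Try each rung in order; the first hit wins.

    Structured strategies (accessibility-tree lookup, selector queries,
    JavaScript evaluation) come first, so a screenshot-plus-OCR rung is only
    paid for when the page offers no useful structure.
    """
    for name, rung in ladder:
        handle = rung(description)
        if handle is not None:
            return Target(handle=handle, rung=name)
    raise LookupError(f"No targeting strategy could locate: {description!r}")
```

Ordering the rungs from cheapest and most structured to most expensive is what lets the agent avoid visual reasoning on most pages while still degrading gracefully on canvas apps and custom-drawn UIs.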