Top Builders

Explore the top contributors with the most app submissions in our community.

Anthropic

Anthropic’s research on the Constitutional AI training approach focuses on developing AI systems that are safe by design and aligned with human values. By prioritizing safety, Anthropic aims to create capable, corrigible AI systems that are safe for humans to use.

Anthropic Claude

Claude is your friendly and versatile AI language model that can assist you as a company representative, research assistant, creative partner, or task automator.

Claude is Safe, Clever, and Yours. Built with safety at its core and with industry-leading security practices, Claude can be customized to handle complex multi-step instructions and help you accomplish your tasks.

You can easily use Claude in your app; the necessary APIs, boilerplates, tutorials explaining how to do so, and more can all be found on our Claude tech page.
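As a quick illustration, here is a minimal sketch of calling Claude through Anthropic's official Python SDK. The model name and prompt are placeholders; see the tech page for current models and full boilerplates.

```python
# pip install anthropic
import anthropic

# Reads ANTHROPIC_API_KEY from the environment by default.
client = anthropic.Anthropic()

# Model name is a placeholder; pick a current model from the docs.
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Draft a polite reply to a customer asking about refunds."}
    ],
)

# The response content is a list of blocks; text replies carry a .text field.
print(message.content[0].text)
```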

Claude Code

Claude Code is a command-line tool from Anthropic for agentic coding. It enables Claude to refactor, debug, and manage code directly in the terminal. You can find more information on our Claude Code tech page.


Anthropic AI Technologies Hackathon projects

Discover innovative solutions crafted with Anthropic AI Technologies, developed by our community members during our engaging hackathons.

Thor v2 — RAG-Free Fitness Intelligence

Thor v2 is a domain-expert fitness AI built on a single fine-tuned Qwen3-8B model trained on 7,118 carefully constructed instruction-response pairs spanning exercise science, nutrition, programming, injury screening, and population-specific guidance. Unlike RAG-based fitness apps that retrieve documents at query time, Thor v2 encodes knowledge directly into model weights during supervised fine-tuning on AMD MI300X hardware using ROCm.

Evidence is referenced through compact citation keys — e.g. [CITE:NSCA_HYPERTROPHY_VOLUME] — that the model emits inline. A lightweight citation resolver validates these keys against a locked registry and surfaces the source document on demand. If the model emits an unknown key, it is rejected at runtime, so hallucinated citations are structurally impossible.

The dataset covers 113 unique citation keys from 9 authoritative organisations — NSCA, ACSM, ISSN, NASM, HHS, USDA, NIH, CDC, and ExRx — with 80 exercise technique entries and 14 population profiles including senior, postpartum, teen, vegan, rehab return, and competitive athlete. Six conversational style variants (casual, research-nerd, anxious, skeptical, verbose, follow-up-first) are baked into training so the model adapts tone naturally without prompt engineering.

Training results: 100% JSON contract pass rate across all eval prompts. Coach gating behavior confirmed — the model asks clarifying questions before prescribing when context is missing, rather than giving generic advice. All responses emit valid citation_keys, follow_up_questions, and safety_notes fields. Adapter size: <350MB on top of a frozen 8B base. Built entirely on AMD MI300X (192GB HBM3, ROCm 6.3) using HuggingFace PEFT + TRL. One model. No retrieval. No vector database. The model knows. The resolver proves.
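To make the citation mechanism concrete, here is a minimal sketch of how a resolver like Thor v2's might validate inline keys against a locked registry. The registry entries, pattern, and function names below are illustrative assumptions, not the project's actual code.

```python
import re

# Hypothetical locked registry mapping citation keys to source documents.
# The keys and titles here are illustrative, not Thor v2's actual registry.
CITATION_REGISTRY = {
    "NSCA_HYPERTROPHY_VOLUME": "NSCA position stand on resistance training volume",
    "ACSM_AEROBIC_GUIDELINES": "ACSM guidelines for aerobic exercise prescription",
}

CITE_PATTERN = re.compile(r"\[CITE:([A-Z0-9_]+)\]")

def resolve_citations(model_output: str) -> list[str]:
    """Validate every inline [CITE:...] key against the locked registry.

    An unknown key raises immediately, so a hallucinated citation is
    rejected at runtime instead of ever reaching the user.
    """
    sources = []
    for key in CITE_PATTERN.findall(model_output):
        if key not in CITATION_REGISTRY:
            raise ValueError(f"Unknown citation key: {key}")
        sources.append(CITATION_REGISTRY[key])
    return sources

# Example: a response citing a registered key resolves to its source document.
print(resolve_citations("Aim for 12-20 weekly sets [CITE:NSCA_HYPERTROPHY_VOLUME]."))
```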

TempoGraph: Local Multimodal Video Analysis

TempoGraph is a fully-local, privacy-preserving multimodal video analysis system that turns raw video files into rich structured outputs (entities, behaviors, transcripts, timelines, and interactive knowledge graphs) without sending a single frame to the cloud. The pipeline runs in six stages.

Stage 1 — Frame Selection: Motion-aware sampling with static, moving, and auto camera modes. For moving cameras it estimates a homography to separate object motion from camera movement, then identifies keyframes where motion peaks exceed a configurable sigma threshold (see the first sketch below).

Stage 1.5 — Audio Transcription: Whisper.cpp running on Vulkan transcribes the full audio track into millisecond-accurate segments.

Stage 2 — YOLO Detection: YOLO26 runs on a second GPU over every sampled frame, outputting normalized bounding boxes, class names, track IDs, and confidence scores.

Stage 3 — Depth Estimation: Depth Anything V2, via HuggingFace Transformers, adds per-detection mean depth to every bounding box, giving 3D spatial context to 2D detections.

Stage 4 — Frame Scoring: Picks which frames the VLM actually sees. In keyframes mode, only motion-peak frames are forwarded. In scored mode, FrameScorer ranks all YOLO-scanned frames using a weighted combination of motion delta, new YOLO class appearances, tracked-object churn, and IoU drop between frames, then fills the VLM budget with the highest-signal frames. Keyframes are always pinned first regardless of mode (see the second sketch below).

Stage 5 — VLM Captioning: Qwen3.5-VL-9B, served by a custom llama.cpp build compiled for AMD ROCm/HIP, runs on an AMD RX 9070 XT with a 100k-token context window. Frames are chunked and sent to the model alongside YOLO-derived annotations, and each chunk's summary seeds the next prompt for narrative continuity across the video.

Stage 6 — Aggregation: A final text-only LLM call synthesizes all per-chunk captions and the audio transcript into structured JSON with entities, visual events, audio events, and multimodal correlations linking what was said to what was seen.
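Stage 1's moving-camera mode can be approximated as follows. This is a minimal sketch using ORB features and OpenCV's findHomography; the feature detector, matching strategy, and residual metric are assumptions rather than TempoGraph's actual implementation.

```python
import cv2
import numpy as np

def camera_compensated_motion(prev_gray: np.ndarray, gray: np.ndarray) -> float:
    """Estimate global camera motion with a homography, warp the previous
    frame to cancel it, and measure the residual (object) motion."""
    orb = cv2.ORB_create(500)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(gray, None)
    if d1 is None or d2 is None:
        return 0.0
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if len(matches) < 4:  # a homography needs at least 4 point pairs
        return 0.0
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return 0.0
    h, w = gray.shape
    warped = cv2.warpPerspective(prev_gray, H, (w, h))
    # Residual difference after removing camera motion ~ object motion.
    return float(np.mean(cv2.absdiff(gray, warped)))
```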
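Stage 4's scored mode can be sketched the same way. The signal names, weights, and the FrameSignals container below are illustrative; the real FrameScorer combines motion delta, new class appearances, track churn, and IoU drop with its own configurable weighting.

```python
from dataclasses import dataclass

@dataclass
class FrameSignals:
    """Per-frame signals gathered by earlier stages (names are illustrative)."""
    motion_delta: float   # normalized inter-frame motion magnitude
    new_classes: int      # YOLO classes not seen in the previous frame
    track_churn: int      # track IDs that appeared or disappeared
    iou_drop: float       # 1 - mean IoU of matched boxes vs. the previous frame

# Hypothetical weights; the real FrameScorer's weighting is configurable.
WEIGHTS = {"motion": 1.0, "new_classes": 0.5, "churn": 0.3, "iou": 0.8}

def score(s: FrameSignals) -> float:
    """Weighted combination of motion and detection-change signals."""
    return (WEIGHTS["motion"] * s.motion_delta
            + WEIGHTS["new_classes"] * s.new_classes
            + WEIGHTS["churn"] * s.track_churn
            + WEIGHTS["iou"] * s.iou_drop)

def select_frames(keyframes: list[int],
                  scanned: dict[int, FrameSignals],
                  budget: int) -> list[int]:
    """Pin keyframes first, then fill the remaining VLM budget by score."""
    chosen = list(keyframes)
    rest = sorted((i for i in scanned if i not in chosen),
                  key=lambda i: score(scanned[i]), reverse=True)
    chosen.extend(rest[: max(0, budget - len(chosen))])
    return sorted(chosen)
```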