Hugging Face Hub

The Hugging Face Hub is an open-source repository platform that hosts over one million machine learning models, datasets, and interactive applications. It serves as the central collaboration layer for the ML community, enabling developers to discover, share, version, and deploy models across text, vision, audio, and multimodal tasks. Model checkpoints on the Hub are compatible with the Transformers, Diffusers, and Datasets libraries and can be loaded in a few lines of code.
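
As a quick illustration of that last claim, here is a minimal sketch using the Transformers pipeline API (the checkpoint named below is one public example among many):

```python
# A minimal sketch of loading a Hub checkpoint with Transformers.
# The model ID is one public example; any compatible checkpoint works.
from transformers import pipeline

# Downloads the model from the Hub on first use, then serves from the local cache.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("The Hub makes model discovery painless."))
```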

General

  • Author: Hugging Face
  • Type: ML Model Repository and Collaboration Platform
  • Website: huggingface.co
  • Documentation: Hub Documentation
  • Repository: github.com/huggingface/huggingface_hub
  • Models: huggingface.co/models
  • Datasets: huggingface.co/datasets

Start building with Hugging Face Hub

The Hub is the fastest way to get a pretrained model running in your project. Load any of the 1M+ checkpoints directly into Transformers or Diffusers with a single function call, or browse the Hub to find the right base model for your use case. You can host your own models privately and share fine-tuned adapters with the community without uploading full model weights. During AMD-sponsored hackathons on lablab.ai, participants pull models from the Hub, fine-tune or build on them using AMD Developer Cloud GPUs, and publish their final projects back to the Hub as a Space. Explore what the community has built at Hugging Face Use Cases and Applications.
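
A minimal sketch of that pull-and-publish round trip using the huggingface_hub SDK; the repo IDs below are placeholders, not any particular hackathon project:

```python
# A minimal sketch of the pull/publish round trip described above.
# Repo IDs are placeholders; substitute your own username and model names.
from huggingface_hub import login, snapshot_download, create_repo, upload_folder

login()  # prompts for an access token from huggingface.co/settings/tokens

# Pull a base checkpoint's files for local fine-tuning.
local_dir = snapshot_download("Qwen/Qwen2.5-0.5B-Instruct")

# Publish your fine-tuned artifacts back, privately if you prefer.
create_repo("your-username/my-finetuned-model", private=True, exist_ok=True)
upload_folder(
    folder_path="./my-finetuned-model",
    repo_id="your-username/my-finetuned-model",
)
```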

Hugging Face Hub Tutorials

  • Getting Started

Key Features

1M+ models: Text, vision, audio, multimodal, and specialized domain models from top research teams and companies including Meta, Mistral, Google, and Alibaba Cloud.

Private repositories: Host proprietary models and datasets with access controls. Upgrade to a PRO or Enterprise account for private inference endpoints.

Model cards: Structured documentation for model limitations, intended use, training details, and evaluation results — standardized across all public checkpoints.

Version control: Git-based versioning with LFS support for large files. Every model and dataset on the Hub has a full commit history.

Fine-tuned adapters: Share and reuse LoRA and PEFT adapters without uploading full model weights. Adapters reference their base model and load in seconds.
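
A minimal sketch of the adapter workflow described above, assuming hypothetical repo IDs; PEFT loads a small adapter over the frozen base it references:

```python
# A minimal sketch of loading a LoRA adapter from the Hub on top of its
# frozen base model. Both repo IDs are hypothetical placeholders.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("some-org/base-8b-model")    # full weights
model = PeftModel.from_pretrained(base, "your-username/my-lora-adapter") # adapter only
```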


Libraries

  • Transformers: Unified API for pretrained models across text, vision, and audio
  • huggingface_hub: Python SDK for Hub authentication, uploads, and downloads
  • Datasets: Efficient access to Hub datasets with streaming and Arrow-based caching (sketched below)
  • PEFT: Parameter-efficient fine-tuning (LoRA, QLoRA, prefix tuning)
  • Optimum-AMD: Optimized inference and training for AMD hardware via ROCm
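
Here is the streaming sketch referenced in the list; the dataset ID is just a small public example:

```python
# Streaming reads records lazily over HTTP instead of downloading
# the whole dataset first. "wikitext" is just a small public example.
from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train", streaming=True)
for example in ds.take(3):  # take() is available on streaming datasets
    print(example["text"][:80])
```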

Hugging Face Hub AI technology hackathon projects

Discover innovative solutions crafted with Hugging Face Hub AI technology, developed by our community members during our engaging hackathons.

Thor v2 — RAG-Free Fitness Intelligence

Thor v2 is a domain-expert fitness AI built on a single fine-tuned Qwen3-8B model trained on 7,118 carefully constructed instruction-response pairs spanning exercise science, nutrition, programming, injury screening, and population-specific guidance. Unlike RAG-based fitness apps that retrieve documents at query time, Thor v2 encodes knowledge directly into model weights during supervised fine-tuning on AMD MI300X hardware using ROCm.

Evidence is referenced through compact citation keys — e.g. [CITE:NSCA_HYPERTROPHY_VOLUME] — that the model emits inline. A lightweight citation resolver validates these keys against a locked registry and surfaces the source document on demand. If the model emits an unknown key, it is rejected at runtime. Hallucinated citations are structurally impossible.

The dataset covers 113 unique citation keys from 9 authoritative organisations — NSCA, ACSM, ISSN, NASM, HHS, USDA, NIH, CDC, and ExRx — with 80 exercise technique entries and 14 population profiles including senior, postpartum, teen, vegan, rehab return, and competitive athlete. Six conversational style variants (casual, research-nerd, anxious, skeptical, verbose, follow-up-first) are baked into training so the model adapts tone naturally without prompt engineering.

Training results: 100% JSON contract pass rate across all eval prompts. Coach gating behavior confirmed — model asks clarifying questions before prescribing when context is missing, rather than giving generic advice. All responses emit valid citation_keys, follow_up_questions, and safety_notes fields. Adapter size: <350MB on top of a frozen 8B base. Built entirely on AMD MI300X (192GB HBM3, ROCm 6.3) using HuggingFace PEFT + TRL. One model. No retrieval. No vector database. The model knows. The resolver proves.
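
The resolver pattern can be pictured with a short, hypothetical sketch; the registry contents, key format, and function name below are illustrative assumptions rather than Thor v2's actual implementation:

```python
# A hypothetical sketch of a citation-resolver pattern: every key the model
# emits is checked against a locked registry, so an unknown (hallucinated)
# key is rejected before it reaches the user. Registry contents are assumed.
import re

CITATION_REGISTRY = {
    "NSCA_HYPERTROPHY_VOLUME": "NSCA guidance on hypertrophy training volume",
    # ... the real system locks 113 keys, per the project description
}

def resolve_citations(model_output: str) -> list[str]:
    keys = re.findall(r"\[CITE:([A-Z0-9_]+)\]", model_output)
    unknown = [k for k in keys if k not in CITATION_REGISTRY]
    if unknown:
        raise ValueError(f"Rejected unknown citation keys: {unknown}")
    return [CITATION_REGISTRY[k] for k in keys]
```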

ROCmPort AI

CUDA-to-ROCm migration fails not at translation but at correctness. hipify-clang renames APIs mechanically. It cannot detect that a reduction kernel assuming warpSize=32 will silently produce wrong results on AMD's 64-wide wavefronts. Lanes 32 through 63 are skipped. The code compiles. The output is wrong. Nobody tells you. ROCmPort AI is a closed-loop agentic system built to catch exactly this class of bug before it reaches production.

The pipeline runs five agents in sequence. The Analyzer performs a static scan before any LLM call, grounding the system in what is actually in the code. The Translator runs hipify-clang as a first pass, then applies LLM corrections for architecture-specific issues the tool cannot handle. The Optimizer applies MI300X-specific changes: wavefront-64 alignment, LDS bank conflict padding, shared memory tiling. The Tester compiles with hipcc and profiles with rocprof on real AMD hardware. The Coordinator evaluates the profiler output and decides whether to iterate or finalize.

All four demo kernels were compiled and profiled on a real AMD Instinct MI300X on AMD Developer Cloud: gfx942, ROCm 7.2, data_source: real_rocm. The primary model is Qwen2.5-Coder-32B-Instruct, purpose-built for code reasoning tasks. Groq LLaMA-3.3-70B handles log parsing as a cost-efficient fallback. In production, Qwen runs via vLLM on the MI300X instance itself.

Failure cases are documented explicitly, including library-heavy CUDA using CUB, Thrust, or cuDNN, which requires manual review after ROCmPort output. This is intentional. Credibility requires honesty about scope. The value is not speed. It is correctness before execution, and a decision trace that a senior engineer can audit.
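
To make that failure mode concrete, here is a hypothetical sketch of the kind of static check an Analyzer-style pass could run before any LLM call; the patterns and helper below are illustrative assumptions, not ROCmPort's actual code:

```python
# A hypothetical sketch of an Analyzer-style static scan: flag hard-coded
# 32-lane warp assumptions that silently break on AMD's 64-wide wavefronts.
# The patterns and heuristics below are illustrative assumptions.
import re

WARP32_PATTERNS = [
    r"\bwarpSize\s*==\s*32\b",                       # explicit 32-lane check
    r"__shfl_down_sync\s*\([^,]+,[^,]+,\s*16\s*\)",  # shuffle reduction topping out at 16
    r"&\s*31\b",                                     # lane mask valid only for 32-wide warps
]

def scan_for_warp32(source: str) -> list[tuple[int, str]]:
    """Return (line number, line) pairs that assume a 32-lane warp."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(re.search(p, line) for p in WARP32_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

# On a 64-wide wavefront, a reduction step that starts at offset 16 leaves
# lanes 32-63 out of the sum: it compiles cleanly and returns a wrong result.
kernel_line = "val += __shfl_down_sync(0xffffffff, val, 16);"
print(scan_for_warp32(kernel_line))
```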

Raksha Setu: AI emergency bridge

In India and across the developing world, over 2 billion people lack reliable emergency medical response. The average ambulance response time is 18–20 minutes. Cardiac arrest causes irreversible brain damage in 4 minutes. That 14-minute gap is not a medical failure; it is an infrastructure gap. Nobody owns it. Raksha Setu does.

Raksha Setu (Sanskrit: the bridge) is an AI-powered emergency response mesh built on AMD Developer Cloud. When a distress call is received, Qwen 2.5 running on AMD MI300X performs medical triage in under 2 seconds, classifying case severity, type, and the minimum responder qualification required. It then dispatches the three nearest trained Angel volunteers from a verified, gamified network ranked by certification level. The first to confirm arrives on site in under 5 minutes, guided step-by-step by a real-time AI protocol streamed to their phone. When the ambulance eventually arrives, it receives a structured AI-generated handoff log (vitals, timeline, and all actions taken) before the crew even enters the building.

AMD MI300X is not incidental to this product. Inference latency here is measured in lives. Every millisecond of triage delay is a millisecond closer to permanent brain damage. ROCm gives us the speed; the MI300X gives us the scale to handle concurrent emergencies across an entire city.

Raksha Setu is built as infrastructure, not an app. The same Angel volunteer mesh powers Raksha Doot, our Phase 2 women's safety layer, using identical volunteers, identical AMD compute, and an AI threat classification system that closes a different but equally critical gap.
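
A hypothetical sketch of the triage step, assuming Qwen 2.5 is exposed through vLLM's OpenAI-compatible server; the endpoint, model ID, and JSON schema below are illustrative assumptions, not the project's actual code:

```python
# A hypothetical sketch of AI triage over a vLLM OpenAI-compatible endpoint.
# Endpoint, model ID, and schema fields are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

TRIAGE_PROMPT = (
    "You are an emergency medical triage classifier. Reply with JSON only: "
    '{"severity": "critical|urgent|moderate", "case_type": "...", '
    '"min_responder_level": "..."}'
)

def triage(distress_transcript: str) -> dict:
    resp = client.chat.completions.create(
        model="Qwen/Qwen2.5-14B-Instruct",
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": distress_transcript},
        ],
        temperature=0,  # deterministic output for a safety-critical classification
    )
    return json.loads(resp.choices[0].message.content)
```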

Hands for AI

Chrome offers an unusually deep programmatic surface to AI agents — the DevTools Protocol, the accessibility tree, JavaScript evaluation, network interception, multi-tab and multi-context isolation. For most of the web, that surface is enough to skip visual reasoning entirely from the first action, replacing screenshot-and-guess with structured targeting that survives DOM changes. This project drives all of it through a single MCP server, with accessibility references and a six-rung escalation ladder that auto-selects the targeting strategy. Vision and OCR remain available as fallbacks for canvas apps, custom-drawn UIs, and anything without useful structure.

What makes the architecture more than a Chrome controller is the workflow layer alongside it. While the agent works, network traffic is captured and the underlying API patterns are extracted into replayable flows. On subsequent runs the agent skips the browser entirely and executes direct HTTP calls — millisecond execution at a fraction of the inference cost. Credentials and TOTP seeds live in an OS-level vault, so replay works across sessions and machines without secrets ever touching chat context. An optional personal-assistant mode extends the same engine beyond Chrome to native Windows applications via UI Automation, with OCR-based control of anything else, for workflows where browser scope isn't enough.

Most agentic browser tooling today is either tied to a vendor's cloud or wraps a single automation library. This stack stays fully open, MCP-native, and self-hostable, running over local STDIO, which removes the network hop between agent and tool at every step and meaningfully cuts latency on multi-action tasks. It is composable with any compatible host and any other MCP tool. The reasoning layer runs on AMD Developer Cloud with ROCm-hosted open vision and language models. There are no proprietary inference dependencies anywhere in the stack.
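
As a purely hypothetical sketch of the escalation-ladder idea (the rung names, ordering, and stub strategies below are our illustrative assumptions, not this project's code), the pattern is an ordered list of targeting strategies tried cheapest-first:

```python
# A hypothetical sketch of a targeting escalation ladder: try cheap,
# structural strategies first and fall back toward vision last. The rung
# names, ordering, and stub implementations are illustrative assumptions.
from typing import Callable, Optional

def by_ax_ref(target: str) -> Optional[str]:
    return None  # stub: would resolve a stable accessibility-tree reference via CDP

def by_css(target: str) -> Optional[str]:
    return None  # stub: would evaluate a structural DOM selector in the page

def by_text(target: str) -> Optional[str]:
    return f"text={target}"  # stub: naive visible-text match always "succeeds" here

LADDER: list[tuple[str, Callable[[str], Optional[str]]]] = [
    ("accessibility-ref", by_ax_ref),
    ("css-selector", by_css),
    ("visible-text", by_text),
    # ... further rungs (heuristics, OCR, full vision) in a six-rung ladder
]

def locate(target: str) -> str:
    for name, strategy in LADDER:
        ref = strategy(target)
        if ref is not None:
            return ref  # the first rung that resolves the target wins
    raise LookupError(f"no strategy resolved {target!r}")

print(locate("Submit button"))  # falls through to the visible-text rung
```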