Top Builders

Explore the top contributors with the most app submissions in our community.

Hugging Face Spaces

Hugging Face Spaces is a hosting platform for interactive machine learning applications. Developers build demos and tools using Gradio or Streamlit, then deploy them on Hugging Face infrastructure and receive a public URL. Spaces are free to run on shared CPU instances and can be upgraded to GPU-backed hardware for workloads that need faster inference. They are widely used during hackathons and AI events to share working prototypes without managing any server infrastructure.

General

Author: Hugging Face
Type: AI Application Hosting Platform
Website: huggingface.co/spaces
Documentation: Spaces Documentation
Hardware Options: CPU (free), T4, A10G, A100, H100
Frameworks: Gradio, Streamlit, Docker

Start building with Hugging Face Spaces

Spaces is the quickest path from a working model to a shareable demo. Connect a model from the Hub, wrap it in a Gradio interface, and push to a Space — the application goes live with a public URL in minutes. GPU instances are available on an hourly basis for workloads that need real compute. During hackathons on lablab.ai, submitting a Hugging Face Space link is a standard way to present a working project. Spaces created under an event's Hugging Face organization are publicly discoverable, and community members can vote with likes. Explore examples at Hugging Face Use Cases and Applications.
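The "model to shareable demo" flow above can be sketched as a minimal Space entry point. This is an illustrative sketch, not official Hugging Face code: the `respond` function is a hypothetical stand-in for real model inference, and the Gradio import is guarded so the inference logic also works where Gradio is not installed.

```python
# app.py — minimal sketch of a Gradio Space entry point (illustrative).

def respond(prompt: str) -> str:
    """Stand-in for real model inference; replace with a model from the Hub."""
    return f"Echo: {prompt}"

try:
    import gradio as gr  # preinstalled on Gradio Spaces

    # Wrap the function in a simple text-in/text-out interface.
    demo = gr.Interface(fn=respond, inputs="text", outputs="text")
    # demo.launch()  # uncomment to run locally; a Space serves `demo` at a public URL
except ImportError:
    pass  # Gradio not installed here; respond() is still usable standalone
```

Pushing a file like this to a Space repository is what turns the function into a hosted demo with a shareable link.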

Key Features

Instant deployment: Push a Gradio or Streamlit app to a Space repository and get a live URL without any server configuration or DevOps setup.

GPU hardware tiers: Upgrade to T4, A10G, A100, or H100 instances for workloads that need GPU acceleration. Pricing is hourly with no long-term commitment.

Organization Spaces: Create Spaces under a team or event organization so project submissions stay grouped and discoverable by judges and community members.

Persistent storage: Attach a storage volume to a Space for stateful applications that need to read or write files between requests.

Community discovery: Spaces are publicly indexed on Hugging Face and sortable by likes, making them a practical way to share and showcase AI projects.


Boilerplates

Hugging Face Spaces Hackathon projects

Discover innovative solutions built with Hugging Face Spaces, developed by our community members during our hackathons.

Boundary Forge

Boundary Forge is a model-agnostic AI safety pipeline that helps enterprises deploy LLMs with measurable confidence. Instead of relying on manual red-teaming or hoping a system prompt is enough, Boundary Forge automatically attacks a model, identifies where it behaves unsafely or inconsistently, and converts those discovered failures into runtime guardrails.

For this hackathon, we demonstrated Boundary Forge using Qwen 2.5-72B on AMD Developer Cloud with AMD MI300X. Qwen powered the adversarial red-team workflow and was also the model under test, allowing the system to expose real behavioral failure boundaries such as jailbreak attempts, policy drift, unsafe financial guidance, KYC bypass, fraud patterns, coercion signals, asset concealment, and inconsistent refusals.

The pipeline works in five stages: generate adversarial probes, run high-throughput model inference, mathematically detect boundary failures, compile those failures into semantic safety rules, and enforce them through middleware before risky prompts reach the LLM. This creates a practical enterprise safety layer that can block, flag, or ask for clarification in real time.

The important point is that Boundary Forge is not tied to one model. Qwen 2.5-72B was used to demonstrate the system, but the architecture can benchmark and harden other open-source or proprietary models as well. The goal is to improve models exactly where they fail and make model evaluation repeatable across different deployments.

In our AMD Cloud production run with Qwen 2.5-72B, Boundary Forge generated 1,009 unique adversarial probes, fired 4,036 total inferences, discovered 25 boundary failures, and compiled 15 semantic safety rules. The middleware intercepted 68% of known attacks and reduced the effective failure rate from 2.48% to 0.79%. Boundary Forge turns AI safety into an automated engineering workflow: attack, measure, learn, protect, and benchmark again.
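The enforcement stage described above, where compiled rules decide to block, flag, or ask for clarification before a prompt reaches the LLM, could be sketched as a simple rule matcher. Everything here is hypothetical: the rule names, patterns, and actions are illustrative stand-ins for the semantic safety rules Boundary Forge compiles from discovered failures.

```python
import re
from dataclasses import dataclass


@dataclass
class SafetyRule:
    """A compiled guardrail: a pattern for a known failure plus an action."""
    name: str
    pattern: re.Pattern
    action: str  # "block", "flag", or "clarify"


# Hypothetical rules; real rules would be compiled from discovered failures.
RULES = [
    SafetyRule("kyc_bypass", re.compile(r"bypass.*(kyc|identity check)", re.I), "block"),
    SafetyRule("asset_concealment", re.compile(r"hide assets", re.I), "flag"),
]


def intercept(prompt: str) -> str:
    """Middleware check run before the prompt reaches the LLM."""
    for rule in RULES:
        if rule.pattern.search(prompt):
            return rule.action
    return "allow"
```

A real deployment would use semantic matching rather than regexes, but the control flow is the same: match against known failure boundaries first, and only forward prompts that pass.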

Thor v2 — RAG-Free Fitness Intelligence

Thor v2 is a domain-expert fitness AI built on a single fine-tuned Qwen3-8B model trained on 7,118 carefully constructed instruction-response pairs spanning exercise science, nutrition, programming, injury screening, and population-specific guidance. Unlike RAG-based fitness apps that retrieve documents at query time, Thor v2 encodes knowledge directly into model weights during supervised fine-tuning on AMD MI300X hardware using ROCm.

Evidence is referenced through compact citation keys — e.g. [CITE:NSCA_HYPERTROPHY_VOLUME] — that the model emits inline. A lightweight citation resolver validates these keys against a locked registry and surfaces the source document on demand. If the model emits an unknown key, it is rejected at runtime. Hallucinated citations are structurally impossible.

The dataset covers 113 unique citation keys from 9 authoritative organisations — NSCA, ACSM, ISSN, NASM, HHS, USDA, NIH, CDC, and ExRx — with 80 exercise technique entries and 14 population profiles including senior, postpartum, teen, vegan, rehab return, and competitive athlete. Six conversational style variants (casual, research-nerd, anxious, skeptical, verbose, follow-up-first) are baked into training so the model adapts tone naturally without prompt engineering.

Training results: 100% JSON contract pass rate across all eval prompts. Coach gating behavior confirmed — the model asks clarifying questions before prescribing when context is missing, rather than giving generic advice. All responses emit valid citation_keys, follow_up_questions, and safety_notes fields. Adapter size: <350MB on top of a frozen 8B base. Built entirely on AMD MI300X (192GB HBM3, ROCm 6.3) using HuggingFace PEFT + TRL.

One model. No retrieval. No vector database. The model knows. The resolver proves.
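The citation-resolver idea, where inline keys are only valid if they exist in a locked registry, could be sketched like this. The registry entry and its text are illustrative placeholders, not Thor v2's actual data; only the [CITE:...] marker format comes from the description above.

```python
import re

# Locked registry: the only citation keys the system will ever accept.
# The entry below is an illustrative placeholder, not real Thor v2 data.
REGISTRY = {
    "NSCA_HYPERTROPHY_VOLUME": "NSCA guidance on training volume (placeholder text)",
}


class UnknownCitation(ValueError):
    """Raised when the model emits a key that is not in the registry."""


def extract_keys(text: str) -> list[str]:
    """Pull [CITE:...] markers the model emits inline."""
    return re.findall(r"\[CITE:([A-Z0-9_]+)\]", text)


def resolve(key: str) -> str:
    """Return the source for a citation key, rejecting unregistered keys."""
    if key not in REGISTRY:
        raise UnknownCitation(key)  # a hallucinated key fails here, at runtime
    return REGISTRY[key]
```

Because validation happens outside the model, against a fixed lookup table, an invented citation can never be surfaced to the user: it either resolves to a registered source or is rejected.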

ROCmPort AI

CUDA-to-ROCm migration fails not at translation but at correctness. hipify-clang renames APIs mechanically. It cannot detect that a reduction kernel assuming warpSize=32 will silently produce wrong results on AMD wavefront-64. Lanes 32 through 63 are skipped. The code compiles. The output is wrong. Nobody tells you. ROCmPort AI is a closed-loop agentic system built to catch exactly this class of bug before it reaches production.

The pipeline runs five agents in sequence. The Analyzer performs a static scan before any LLM call, grounding the system in what is actually in the code. The Translator runs hipify-clang as a first pass, then applies LLM corrections for architecture-specific issues the tool cannot handle. The Optimizer applies MI300X-specific changes: wavefront-64 alignment, LDS bank-conflict padding, shared-memory tiling. The Tester compiles with hipcc and profiles with rocprof on real AMD hardware. The Coordinator evaluates the profiler output and decides whether to iterate or finalize.

All four demo kernels were compiled and profiled on a real AMD Instinct MI300X on AMD Developer Cloud (gfx942, ROCm 7.2, data_source: real_rocm). The primary model is Qwen2.5-Coder-32B-Instruct, purpose-built for code reasoning tasks. Groq LLaMA-3.3-70B handles log parsing as a cost-efficient fallback. In production, Qwen runs via vLLM on the MI300X instance itself.

Failure cases are documented explicitly, including library-heavy CUDA using CUB, Thrust, or cuDNN, which requires manual review after ROCmPort output. This is intentional. Credibility requires honesty about scope. The value is not speed. It is correctness before execution, and a decision trace that a senior engineer can audit.
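A tiny sketch of the Analyzer's static-scan idea: flag source patterns that hard-code the 32-lane NVIDIA warp model, which is exactly what breaks on AMD's 64-lane wavefronts. The patterns here are illustrative heuristics, not ROCmPort's actual rule set.

```python
import re

# Heuristic patterns suggesting a kernel assumes 32-lane warps (illustrative).
WARP32_PATTERNS = [
    r"\bwarpSize\s*==?\s*32\b",                              # explicit warpSize == 32
    r"__shfl_down_sync\([^,]+,\s*[^,]+,\s*(16|8|4|2|1)\)",   # 32-lane shuffle reduction tree
    r"&\s*31\b",                                             # lane-id masking assuming 32 lanes
]


def scan_cuda_source(src: str) -> list[str]:
    """Return the warp-size-32 assumptions found in a CUDA source string."""
    return [pat for pat in WARP32_PATTERNS if re.search(pat, src)]
```

A scan like this runs before any LLM call, so the later agents start from a concrete list of suspect sites rather than guessing; code written against `warpSize` itself (e.g. `threadIdx.x % warpSize`) passes clean.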

AssemblyMind

AssemblyMind is a real-time multimodal agent for hardware assembly auditing. In electronics prototyping, simple assembly errors such as reversed polarity, incorrect connections, and component mismatches are a leading cause of board failure. These errors are typically caught only after power-on, resulting in destroyed components and hours of debugging.

Existing Automated Optical Inspection systems are designed for high-volume manufacturing floors and require dedicated fixtures, pre-programmed rules, and substantial capital investment. They do not serve the engineer working on a one-off prototype at a workbench.

AssemblyMind addresses this gap by combining schematic document understanding with live visual perception. The system ingests schematic PDFs or netlists and processes live camera feeds from a standard webcam. A vision-language model, running on AMD Instinct MI300X GPUs via ROCm, performs semantic reasoning across the schematic and physical assembly to detect discrepancies before power is applied. The agent outputs structured audit results and natural-language guidance, flagging errors such as incorrect component orientation or mismatched pin connections.

For this hackathon, we demonstrate an end-to-end pipeline: schematic ingestion, real-time camera analysis, multimodal reasoning, and immediate feedback. Built with Qwen2.5-VL, LangChain for agentic orchestration, and Hugging Face Optimum-AMD for ROCm optimization, the system utilizes the MI300X's high memory bandwidth and large HBM3 capacity for efficient high-resolution image processing and long-context multimodal inference. The result is a practical tool that reduces rework time and prevents costly prototyping failures.
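The core check, comparing schematic connectivity against what the vision model reports seeing, could be sketched as a set difference over pin-to-net connections. The data structures and field names here are hypothetical; the real system derives the observed side from the VLM's reading of the camera feed.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Connection:
    """One expected pin-to-net connection from the schematic/netlist (illustrative)."""
    component: str
    pin: str
    net: str


def audit(expected: set, observed: set) -> dict:
    """Compare schematic connections with what the vision model reports."""
    missing = expected - observed     # not placed, or not detected yet
    unexpected = observed - expected  # miswired or reversed connections
    return {
        "status": "pass" if not missing and not unexpected else "fail",
        "missing": sorted(missing, key=lambda c: (c.component, c.pin)),
        "unexpected": sorted(unexpected, key=lambda c: (c.component, c.pin)),
    }
```

Reversed polarity shows up naturally in this framing: a diode wired backwards produces two unexpected connections (anode on the wrong net, cathode on the wrong net), which the audit flags before power is applied.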