Top Builders

Explore the top contributors showcasing the highest number of app submissions within our community.

YOLO

YOLO (You Only Look Once) is a state-of-the-art, real-time object detection algorithm that can quickly detect and locate objects within an image or video. The YOLO architecture works by dividing an input image into a grid of cells, where each cell is responsible for detecting objects within its region. YOLO returns bounding boxes for all the objects in the image, predicting both the probability that an object is present in each box and a class probability that identifies the type of object. YOLO is a highly effective object detection algorithm, and releasing it as an open-source project led the community to make several improvements in a remarkably short time.
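The grid-based idea above can be illustrated with a short sketch. This is not the actual YOLO implementation, only a toy function showing how an object's center determines which grid cell is responsible for predicting it (the original paper uses an S = 7 grid):

```python
# Illustrative sketch, not YOLO itself: the grid cell containing an object's
# center point is the cell responsible for predicting that object's bounding
# box and class probabilities.

def responsible_cell(box_center, image_size, grid_size=7):
    """Return the (row, col) grid cell responsible for a detection.

    box_center: (x, y) pixel coordinates of the object's center.
    image_size: (width, height) of the image in pixels.
    grid_size:  number of cells per side (the original YOLO paper uses S=7).
    """
    x, y = box_center
    width, height = image_size
    col = min(int(x / width * grid_size), grid_size - 1)
    row = min(int(y / height * grid_size), grid_size - 1)
    return row, col

# An object centered at (320, 240) in a 640x480 image lands in cell (3, 3).
print(responsible_cell((320, 240), (640, 480)))  # -> (3, 3)
```

In the real network, each cell additionally outputs box offsets, a confidence score, and per-class probabilities; this sketch only captures the cell-assignment step.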

General
Release date: 2015
Authors: Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi
Paper: https://arxiv.org/abs/1506.02640
Type: Object detection algorithm

YOLO - Resources

Learn even more about YOLO!

  • V7 Labs blog: "YOLO: Algorithm for Object Detection Explained".
  • YOLOv5 repository: object detection architectures and models pretrained on the COCO dataset.
  • YOLOv6 web demo: Gradio demo for YOLOv6 object detection on videos.
  • Hugging Face Spaces: test YOLOv7 in the browser with Hugging Face Spaces.

YOLO AI Technologies Hackathon projects

Discover innovative solutions crafted with YOLO AI Technologies, developed by our community members during our engaging hackathons.

SafeGuard Sentinel

Autonomous robots and AI agents are becoming increasingly common in warehouses, hospitals, construction sites, and public spaces. Yet most systems allow these agents to act freely, with no real-time oversight layer between intent and execution. SafeGuard Sentinel solves this.

SafeGuard Sentinel is an AI governance layer that intercepts every proposed robot action before it executes. Using YOLOv8 computer vision, it analyzes the live environment to detect humans and obstacles. A rule-based safety policy engine then evaluates the action against eight safety rules covering human proximity, speed limits, zone boundaries, and fleet-wide conflicts, and assigns a risk score from 0 to 100%. Every decision returns one of three verdicts: ALLOW, WARN, or BLOCK. An optional LLM reasoning layer then generates a plain-English explanation of why the decision was made, making the system fully explainable and auditable.

Three advanced features make SafeGuard Sentinel production-realistic. Zone Mapping lets operators define restricted, warning, and safe areas directly on the camera feed; actions near restricted zones automatically receive elevated risk scores. Multi-Robot Fleet Management tracks multiple agents simultaneously, with fleet-level rules that pause all movement when multiple robots are blocked. The Human Override Panel allows authorized operators to challenge any blocked action within a 2-minute window, with mandatory justification logged to a permanent audit trail.

SafeGuard Sentinel demonstrates that autonomous systems don't have to choose between capability and safety. With the right governance layer, every action can be fast, explainable, and human-supervised.
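The rule-based verdict flow described above can be sketched as a small scoring function. This is a hypothetical illustration: the rule names, thresholds, and weights here are assumptions for demonstration, not SafeGuard Sentinel's actual policy engine or its eight real rules.

```python
# Hypothetical sketch of a rule-based safety policy engine in the style
# described for SafeGuard Sentinel. All thresholds and weights below are
# illustrative assumptions, not the project's real configuration.

def evaluate_action(human_distance_m, speed_mps, near_restricted_zone,
                    blocked_fleet_count):
    """Score a proposed robot action from 0-100 and return a verdict."""
    risk = 0
    if human_distance_m < 1.0:       # human proximity rule (close range)
        risk += 50
    elif human_distance_m < 3.0:     # human proximity rule (warning range)
        risk += 20
    if speed_mps > 1.5:              # speed limit rule
        risk += 25
    if near_restricted_zone:         # zone boundary rule: elevated risk
        risk += 30
    if blocked_fleet_count >= 2:     # fleet-wide conflict rule
        risk += 40
    risk = min(risk, 100)            # cap the score at 100

    # Map the risk score to one of the three verdicts.
    if risk >= 70:
        return risk, "BLOCK"
    if risk >= 30:
        return risk, "WARN"
    return risk, "ALLOW"

print(evaluate_action(5.0, 1.0, False, 0))  # -> (0, 'ALLOW')
print(evaluate_action(2.0, 2.0, False, 0))  # -> (45, 'WARN')
print(evaluate_action(0.5, 2.0, True, 0))   # -> (100, 'BLOCK')
```

In the real system, a vision model such as YOLOv8 would supply the human and obstacle detections that feed inputs like `human_distance_m`, and the verdict would be passed on to the explanation layer and audit log.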