Top Builders

Explore the top contributors with the highest number of app submissions in our community.

Streamlit: Effortless Front-Ends for Your Data Apps

Streamlit is a pioneering technology provider that specializes in turning data scripts into shareable web apps with minimal effort. Launched in 2018, Streamlit has gained popularity for its ease of use and efficiency, empowering data scientists and developers to create and deploy data-driven applications swiftly.

General
Author: Streamlit
Repository: https://github.com/streamlit/streamlit
Type: Framework for ML and data science apps

Key Features

  • Transforms Python scripts into interactive apps with simple annotations, dramatically reducing development time.
  • Facilitates real-time interactivity directly from Python code without requiring front-end expertise.
  • Supports hot-reloading, allowing instant app updates as the underlying code changes.
  • Provides built-in support for a wide array of widgets, enabling the addition of interactive features without additional coding.

Start building with Streamlit's products

Streamlit offers a range of features designed to simplify app creation and deployment, enhancing productivity in data science and machine learning. Explore how you can leverage Streamlit to turn your data projects into interactive applications, and don't forget to check out the innovative community projects built with Streamlit featured below!

List of Streamlit's products

Streamlit Library

The Streamlit Library allows developers to quickly convert Python scripts into interactive web apps. This library is packed with easy-to-use functionalities that make it straightforward to add widgets, charts, maps, and media files, transforming complex data science projects into user-friendly applications.

Streamlit Sharing

Streamlit Sharing provides the hosting infrastructure to share Streamlit apps with the world. It simplifies deployment, enabling users to go from script to app in minutes on a secure and scalable platform.

Streamlit for Teams

Streamlit for Teams is designed for collaboration and enterprise usage, offering additional features like integration with existing databases, advanced security protocols, and customized control for managing user access and data privacy.

System Requirements

Streamlit is compatible with Linux, macOS, and Windows systems, requiring Python 3.6 or later. It typically runs with minimal hardware requirements, though performance scales with available resources. For optimal performance, a modern processor and sufficient RAM are recommended, with a stable internet connection for deploying apps using Streamlit Sharing. Modern browsers with JavaScript support are required to view and interact with the apps.

Streamlit Hackathon Projects

Discover innovative solutions crafted with Streamlit, developed by our community members during our engaging hackathons.

SafeGuard Sentinel

Autonomous robots and AI agents are becoming increasingly common in warehouses, hospitals, construction sites, and public spaces. Yet most systems allow these agents to act freely, with no real-time oversight layer between intent and execution. SafeGuard Sentinel solves this.

SafeGuard Sentinel is an AI governance layer that intercepts every proposed robot action before it executes. Using YOLOv8 computer vision, it analyzes the live environment to detect humans and obstacles. A rule-based safety policy engine then evaluates the action against eight safety rules covering human proximity, speed limits, zone boundaries, and fleet-wide conflicts, and assigns a risk score from 0 to 100%. Every decision returns one of three verdicts: ALLOW, WARN, or BLOCK. An optional LLM reasoning layer then generates a plain-English explanation of why the decision was made, making the system fully explainable and auditable.

Three advanced features make SafeGuard Sentinel production-realistic. Zone Mapping lets operators define restricted, warning, and safe areas directly on the camera feed; actions near restricted zones automatically receive elevated risk scores. Multi-Robot Fleet Management tracks multiple agents simultaneously, with fleet-level rules that pause all movement when multiple robots are blocked. The Human Override Panel allows authorized operators to challenge any blocked action within a 2-minute window, with mandatory justification logged to a permanent audit trail.

SafeGuard Sentinel demonstrates that autonomous systems don't have to choose between capability and safety. With the right governance layer, every action can be fast, explainable, and human-supervised.
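
The risk-scoring-to-verdict flow could be sketched as below. The rule names, weights, and thresholds here are illustrative assumptions, not the project's actual policy engine.

```python
# Simplified sketch of a rule-based safety verdict: score an action
# 0-100, then map the score to ALLOW / WARN / BLOCK.

def evaluate_action(human_distance_m: float, speed_mps: float,
                    in_restricted_zone: bool, fleet_blocked: int) -> tuple[str, int]:
    """Return (verdict, risk score) for a proposed robot action."""
    risk = 0
    if human_distance_m < 1.0:    # human proximity rule
        risk += 50
    elif human_distance_m < 3.0:
        risk += 25
    if speed_mps > 1.5:           # speed limit rule
        risk += 20
    if in_restricted_zone:        # zone boundary rule
        risk += 30
    if fleet_blocked >= 2:        # fleet-wide conflict rule
        risk += 20
    risk = min(risk, 100)

    if risk >= 70:
        return "BLOCK", risk
    if risk >= 40:
        return "WARN", risk
    return "ALLOW", risk

print(evaluate_action(5.0, 0.5, False, 0))  # far from humans, slow: low risk
print(evaluate_action(0.5, 2.0, True, 2))   # multiple violations: high risk
```

The real system adds an LLM layer on top of such a verdict to produce the plain-English explanation for the audit trail.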

ClawAudit - The Autonomous Web3 DevSecOps Agent

In recent years, billions have been drained from Web3 protocols through smart contract exploits. Traditional manual audits cost tens of thousands of dollars and take weeks to complete, leaving developers exposed. Code is money, and current defense systems are too slow.

ClawAudit solves this by democratizing blockchain security through "shift-left" automation. Built natively on OpenClaw and powered by Gemini 2.5 Pro, ClawAudit acts as an autonomous, real-time immune system directly inside the developer's workflow. Operating as a native CI/CD pipeline agent, it intercepts GitHub pull requests to perform deep static analysis. It goes beyond flagging vulnerabilities: it explains the exact exploit scenario and autonomously comments securely patched code directly onto the PR (auto-remediation).

To interface with the outside world, we engineered a complete DevSecOps alert pipeline using custom OpenClaw skills:

  • Secure Paging: instantly routes a detailed vulnerability breakdown to the on-call developer via Telegram.
  • Dual-Memory Architecture: posts a sanitized, zero-knowledge cryptographic receipt to Moltbook (e.g., the 'lablab' submolt), logging the audit publicly without exposing the zero-day flaw.

The B2B SaaS vision: I set out to build a scalable business for the Surge ecosystem. ClawAudit operates as a metered API gateway, allowing Web3 companies to inject the agent into their repositories for continuous security scanning, billed per execution. ClawAudit is designed to be the scalable future of Web3 security and DeFi tooling, giving developers the power to focus on solutions, not trials.
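
To make the "shift-left" CI-gate idea concrete, here is a heavily simplified sketch. ClawAudit's real analysis is LLM-driven; this regex heuristic, the function name, and the sample contract are all invented for illustration and only flag one classic smell (a state write after an external call, the reentrancy pattern).

```python
# Toy static check: flag Solidity lines that update a balance mapping
# AFTER an external call has been made (potential reentrancy).
import re

def flag_reentrancy(source: str) -> list[str]:
    findings = []
    call_seen = False
    for lineno, line in enumerate(source.splitlines(), start=1):
        if re.search(r"\.call\{value:", line):   # external value transfer
            call_seen = True
        if call_seen and re.search(r"balances\[[^\]]+\]\s*[-+]?=", line):
            findings.append(f"line {lineno}: state updated after external call")
    return findings

VULNERABLE = """\
function withdraw(uint amount) external {
    (bool ok,) = msg.sender.call{value: amount}("");
    balances[msg.sender] -= amount;
}"""

print(flag_reentrancy(VULNERABLE))
```

A CI agent would run checks like this on every pull request diff and post the findings, plus a suggested patch, as review comments before merge.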

RoboDK-Based Quantum State Simulator

The Quantum‑Enhanced Robotics Simulator (QERS) is a fully functional digital testbed for designing, testing, and validating robotic systems without physical hardware. Our goal is to narrow the reality gap between simulation and the real world by combining deterministic macro‑physics from engines like PyBullet with a quantum‑stochastic plugin that injects realistic noise via Qiskit.

The simulator supports deterministic, stochastic, and quantum‑perturbed stepping modes and exposes a FastAPI REST API for running jobs, retrieving metrics, and managing assets. A Celery/Redis job system queues and executes simulation runs asynchronously, while the Next.js/Three.js web application provides a real‑time dashboard with a 3D viewport, scene tree, metrics panel, and controls to toggle between classical domain randomization and quantum noise.

Reality profiles define configurable dynamics, sensor, and actuation parameters, enabling multi‑profile evaluation of policies. QERS computes gap metrics such as G<sub>dyn</sub>, G<sub>perc</sub> and G<sub>perf</sub> and includes scripts for benchmarking across profiles and generating reports. Users can import URDFs, run batch simulations, and compute performance drops and rank stability.

Future phases will add mesh segmentation, an AI‑driven text‑to‑algorithm pipeline for generating planner and controller skeletons, and neural‑augmented simulation informed by real data. By combining quantum computing, domain randomization, residual learning, and modern web technologies, QERS demonstrates a practical path to sim‑to‑real transfer and a production‑minded robotics startup.
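
The "performance drop" and "rank stability" evaluations could look roughly like the sketch below. The exact formulas QERS uses for its gap metrics are not specified here, so treat the relative-drop definition of G<sub>perf</sub> and both function names as illustrative assumptions.

```python
# Sketch of two cross-profile evaluation metrics: a relative performance
# gap between a reference profile and a perturbed one, and a check that
# candidate policies rank the same way under every reality profile.

def perf_gap(score_ref: float, score_perturbed: float) -> float:
    """Relative performance drop, clamped at 0 (improvement counts as no gap)."""
    if score_ref <= 0:
        raise ValueError("reference score must be positive")
    return max(0.0, (score_ref - score_perturbed) / score_ref)

def rank_stability(scores_by_profile: dict[str, list[float]]) -> bool:
    """True if every profile orders the candidate policies identically."""
    rankings = []
    for scores in scores_by_profile.values():
        order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        rankings.append(order)
    return all(r == rankings[0] for r in rankings)

# Example: one policy evaluated under deterministic vs. quantum-noise stepping,
# and three policies ranked under both profiles.
print(perf_gap(0.90, 0.72))
print(rank_stability({"deterministic": [0.9, 0.7, 0.5],
                      "quantum": [0.8, 0.6, 0.4]}))
```

Metrics like these are what a batch-benchmarking script would aggregate across reality profiles to produce the reports mentioned above.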

HUMOS

Robot training (e.g., for VLA models) is expensive and hard. What if a first-person view of human hands could be used to train robots instead? Why is this helpful? Anyone with a Mac or an iPhone could then start collecting training data, get paid for it, and the supply of robotics data would accelerate.

Here's the idea: real-time camera data is combined with MediaPipe, SAM3, YOLO, and a VLM, so the egocentric footage can be enriched with accurate masks from SAM3, reasoning from the VLM, and joint data detected by MediaPipe. All of this makes training robots cheap and fast: from a lack of data, an abundance of data is achieved quickly. It is especially valuable for specialized tasks that only certain humans can perform, often in remote places.

Features:

  • Object tracking with persistent IDs across frames
  • Zero-shot state classification via SigLIP, 200x faster than a VLM for open/closed/ajar labels
  • Navigation state classification via VLM (doors, drawers, handles → open/closed/ajar/blocked)
  • Temporal diff: the VLM compares consecutive frames to detect state transitions
  • Navigation timeline: per-object state timeline with colored bars and transition events
  • Hand-object interactions via MediaPipe
  • Ground-truth export: structured JSON with per-frame annotations
  • Accuracy evaluation: compare predictions against manual labels
  • Live perception: real-time webcam inference with auto-recording and post-analysis
  • H.264 video export: browser-playable annotated videos with in-app preview
  • Per-frame timing: inference latency breakdown per model stage

We tried Gemini 3 Flash, Cosmos, and Gemma as the VLM.
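
The ground-truth export described above might be shaped like this sketch. The field names and schema are assumptions for illustration; the project's actual JSON layout may differ.

```python
# Sketch of a per-frame ground-truth export: each frame carries tracked
# objects (persistent IDs, state labels) and detected hand contacts.
import json

def export_ground_truth(frames: list[dict]) -> str:
    """Serialize per-frame annotations into a structured JSON document."""
    doc = {
        "version": 1,
        "num_frames": len(frames),
        "frames": frames,
    }
    return json.dumps(doc, indent=2)

frames = [
    {"frame": 0,
     "objects": [{"id": 3, "label": "door", "state": "closed"}],
     "hand_contacts": []},
    {"frame": 1,
     "objects": [{"id": 3, "label": "door", "state": "ajar"}],
     "hand_contacts": [{"object_id": 3, "hand": "right"}]},
]
print(export_ground_truth(frames))
```

Because the object IDs persist across frames, a downstream consumer can diff consecutive entries to recover the state transitions (here, the door going from closed to ajar while the right hand is in contact).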