
OpenAI's Assistants API

OpenAI's Assistants API simplifies AI integration for developers: it eliminates the need to manage conversation histories and provides access to built-in tools like Code Interpreter and Retrieval. The API also lets developers integrate their own tools, making it a versatile platform for building AI assistants.

General

- Author: OpenAI
- Documentation: Link
- Type: AI Assistant

Model Overview

The Assistants API enables developers to create AI assistants using OpenAI models and tools. It supports various functionalities such as managing conversation threads, triggering responses, and integrating customized tools.

Technology Resources

The Assistants API allows developers to build AI assistants into their own applications. An assistant can leverage models, tools, and knowledge to respond to user queries effectively. The API currently supports Code Interpreter, Retrieval, and Function calling; OpenAI plans to introduce more of its own tools and to allow user-provided tools on the platform.

To explore its capabilities, developers can use the Assistants Playground or follow the integration guide in the official documentation. The integration process involves defining an Assistant, enabling tools, managing conversation threads, and triggering responses.
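The integration flow described above (define an Assistant, enable tools, manage a conversation thread, and trigger a response) can be sketched with the official `openai` Python SDK (v1.x). This is a minimal sketch, not the documentation's canonical example: the assistant name, instructions, model choice, and question are illustrative, and the network call only fires when an API key is present.

```python
import os

# Parameters for the Assistant; name, instructions, and model choice
# here are illustrative, not prescribed by the API.
ASSISTANT_PARAMS = {
    "name": "Math Tutor",
    "instructions": "Answer math questions and show your work.",
    "tools": [{"type": "code_interpreter"}],  # enable a built-in tool
    "model": "gpt-4o",
}

def ask_assistant(question: str) -> str:
    """Define an Assistant, open a thread, post a message, run it, read the reply."""
    from openai import OpenAI  # imported lazily so the sketch loads without the SDK

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    assistant = client.beta.assistants.create(**ASSISTANT_PARAMS)
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=question
    )
    # create_and_poll blocks until the run reaches a terminal state
    client.beta.threads.runs.create_and_poll(
        thread_id=thread.id, assistant_id=assistant.id
    )
    # messages are listed newest-first, so data[0] is the assistant's reply
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return messages.data[0].content[0].text.value

if os.environ.get("OPENAI_API_KEY"):
    print(ask_assistant("What is 3 to the 7th power?"))
```

The thread object is what removes the need to manage conversation history yourself: posting further messages to the same `thread.id` and re-running the assistant continues the conversation with full context.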

OpenAI Assistants API Hackathon Projects

Discover innovative solutions built with the OpenAI Assistants API by our community members during our hackathons.

AI-Powered Smart Glasses Augment Reality Assistant

AIYA is a multimodal AI assistant for smart glasses that delivers real-time intelligence (vision, language, and speech) directly in your field of view, hands-free.

The Solution

AIYA addresses these challenges through a suite of specialized AI agents embedded within smart glasses. Each agent is purpose-built for a specific use case, and they work together as an orchestrated system:

- Vision Agent: analyzes live frames via GPT-4o/Gemini for scene understanding and object detection.
- Translation Agent: OCR detects foreign text and overlays translations instantly on the lens.
- Chat Assistant: voice-driven AI guidance, hands-free, no device needed.
- Navigation Agent: real-time AR turn-by-turn directions overlaid in view.
- Safety & Hazard Agent: monitors for warning signs and hazardous zones, alerting the wearer immediately.

Technology Stack

AIYA is built on a modular architecture: the webcam/glasses feed is captured via WebRTC, frames are analyzed via GPT-4o Vision or Gemini, OCR is powered by Tesseract.js, gesture and object detection run on MediaPipe, avatar-driven responses come from D-ID, and AR overlays are rendered on-device. The system is designed to be lightweight, low-latency, and deployable on Epson Moverio or Ray-Ban Meta smart glasses.

Why AIYA Wins

AIYA is not a single-purpose tool; it is an extensible AI agent platform for the physical world. It reduces language barriers, improves safety outcomes, accelerates workforce productivity, and delivers personalized, context-aware intelligence exactly when and where it is needed. The future of AI is not on a screen; it is in your field of view.
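The orchestration idea in AIYA's description, several purpose-built agents behind one router, can be sketched as a simple event dispatcher. This is a hypothetical illustration: the event kinds and handler bodies are invented stand-ins for the real GPT-4o/Gemini, OCR, and chat calls.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Event:
    kind: str      # e.g. "frame", "text_detected", "voice"
    payload: str

def vision_agent(e: Event) -> str:
    # stand-in for a GPT-4o Vision / Gemini frame-analysis call
    return f"scene: analyzed {e.payload}"

def translation_agent(e: Event) -> str:
    # stand-in for OCR (Tesseract.js in AIYA) plus translation overlay
    return f"overlay: translated '{e.payload}'"

def chat_agent(e: Event) -> str:
    # stand-in for the voice-driven chat assistant
    return f"reply: {e.payload}"

# one purpose-built agent per event kind, as in the description above
ROUTES: Dict[str, Callable[[Event], str]] = {
    "frame": vision_agent,
    "text_detected": translation_agent,
    "voice": chat_agent,
}

def dispatch(event: Event) -> str:
    """Route an incoming event to the agent responsible for it."""
    handler = ROUTES.get(event.kind)
    return handler(event) if handler else "ignored"
```

The design choice this illustrates is that each agent stays independent and single-purpose; adding a navigation or hazard agent means registering one more handler, not touching the others.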

OnboardEase

AI-Assisted Onboarding OS is a simulation-driven training platform that replaces passive onboarding with interactive, role-based execution environments. Instead of only reading documents or watching videos, employees learn by doing inside AI-powered playgrounds tailored to their department:

- Developers work in secure coding sandboxes with access to curated company repositories, fix real bugs, ship features, and receive AI-driven code reviews.
- Sales teams practice live deal-closing in AI IVR simulations that generate objections, analyze tone, and score closing probability.
- Marketing teams run virtual campaigns, optimize budgets, and test messaging strategies with AI feedback on performance and ROI.
- Product managers handle roadmap trade-offs and stakeholder-pressure simulations, while HR professionals conduct AI-driven interview and conflict-resolution scenarios.
- Leadership teams navigate crisis-management and decision-making simulations to strengthen strategic thinking.

The platform integrates company documents, SOPs, and knowledge bases using contextual AI, ensuring every scenario reflects real organizational workflows. Multi-agent systems simulate real-world complexity, evaluate decisions, and provide structured feedback, skill scoring, and certification. Gamification elements such as progress tracking, leaderboards, and promotion-readiness scores increase engagement and accountability. Admin dashboards allow companies to measure onboarding effectiveness, identify skill gaps, and standardize training across teams.

By combining simulation, AI evaluation, and company-specific context, the platform accelerates productivity, reduces ramp-up time, and transforms onboarding into measurable, execution-based readiness.
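The skill-scoring step described above, turning simulated scenarios into a readiness score, could be modeled as a weighted average. This is a hypothetical sketch: the criteria names, weights, and 0-100 scale are invented for illustration, not taken from the platform.

```python
# Illustrative rubric: each criterion contributes a fixed weight to the
# overall score. Weights sum to 1.0 so scores stay on the 0-100 scale.
WEIGHTS = {"correctness": 0.5, "speed": 0.2, "communication": 0.3}

def readiness_score(scenario_scores: list[dict[str, float]]) -> float:
    """Average the weighted per-scenario scores across all completed scenarios."""
    if not scenario_scores:
        return 0.0
    totals = [
        sum(WEIGHTS[criterion] * scores.get(criterion, 0.0)
            for criterion in WEIGHTS)
        for scores in scenario_scores
    ]
    return round(sum(totals) / len(totals), 1)
```

A score like this could then feed the leaderboards and promotion-readiness tracking the description mentions, with per-criterion breakdowns surfacing the skill gaps an admin dashboard would report.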

AutoClaw - Self-Evolving Agent Economy

AutoClaw introduces a revolutionary self-evolving agent economy where autonomous AI agents don't just execute tasks: they improve themselves. Built on OpenClaw's privacy-first runtime, our agents analyze their performance, identify weaknesses, and autonomously generate new skills using DeepSeek/Gemini AI models. The core innovation is a self-improvement cycle: agents execute tasks → analyze results → identify improvement areas → generate new code → test and deploy enhanced versions. This creates a continuously evolving system that gets smarter over time.

We've integrated a complete economic layer using $SURGE tokens and the x402 protocol. Premium skills charge micro-payments (0.1-1.0 $SURGE per use) with automatic revenue sharing: 70% to skill creators, 20% to agent operators, 10% to the network. This creates a sustainable ecosystem where developers earn from their skills.

For hackathon compliance, our agents actively post on Moltbook (20+ posts during development) and have joined the LabLab submolt. The system features three specialized agents: a Twitter Bot for social engagement, a DeFi Analyzer for yield optimization, and a Skill Generator that creates new capabilities. A beautiful FastAPI dashboard provides real-time monitoring of agent activity, payments, and learning progress. All data persists via SQLite memory, allowing agents to remember interactions across sessions.

Built entirely open-source under the MIT license, AutoClaw demonstrates what autonomous agents can achieve today while respecting user privacy through local execution.
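The 70/20/10 revenue split described above is simple to pin down in code. This is an illustrative sketch of the arithmetic only; it does not model the x402 payment protocol or $SURGE token mechanics themselves.

```python
# Split ratios from the description: creator 70%, operator 20%, network 10%.
SPLIT = {"creator": 0.70, "operator": 0.20, "network": 0.10}

def settle(payment_surge: float) -> dict[str, float]:
    """Return each party's share of a single skill-use micro-payment in $SURGE."""
    return {
        party: round(payment_surge * share, 6)
        for party, share in SPLIT.items()
    }
```

For a 0.5 $SURGE skill use, the creator receives 0.35, the operator 0.10, and the network 0.05, and the shares always sum back to the original payment.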

Fleet Bridge

FleetBridge: See All Your Robots in One Place

That Ocado warehouse fire in 2023? $110M in damage because robots from different companies couldn't see each other. FleetBridge fixes that.

Live map shows all 24 robots at once: Amazon, Balyo, Gemini, all on one screen. Click any robot and see everything: battery, current job, where it's been, errors. Robots move smoothly with zoom, pan, and zone overlays. When two are about to crash, you get a red line between them. Send one to charge and watch the animated path appear.

Just type questions instead of clicking through menus. "Which robots are low on battery?" "What's happening in Zone B?" Simple stuff answers instantly. Complex questions route to Gemini with full context: positions, tasks, alerts, everything.

No more cryptic vendor codes. The system translates all errors across manufacturers. Balyo's "OBSTACLE_TIMEOUT" = Amazon's "E-2002". Click any error for actual fix steps with checkboxes you complete before clearing it.

Assign tasks by picking from 13 warehouse presets or typing naturally: "Move inventory from Zone A to Station 5." AI handles the details.

Analytics show which robots work hardest, vendor performance comparisons, and congestion hotspots. The alert feed catches issues before they cause damage: collision warnings, traffic jams, low batteries, blocked paths. Real-time updates every 500ms.

The backend simulates realistic warehouse ops: movement, battery drain, auto task assignments, random errors. A chat panel supports deeper conversations with the AI. Path-visualization windows show robot trails and destinations. An error knowledge base holds cross-vendor translations and remediation guides.

One dashboard replaces three vendor systems. No switching screens. Just clear fleet visibility.
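The cross-vendor error translation described above can be sketched as a lookup from vendor-specific codes to one canonical error name, which the dashboard then pairs with a single fix guide. Only the "OBSTACLE_TIMEOUT"/"E-2002" pair comes from the description; the other codes here are invented for illustration.

```python
# Map (vendor, code) pairs onto canonical error names so the dashboard
# shows one name and one remediation guide regardless of manufacturer.
CANONICAL = {
    ("balyo", "OBSTACLE_TIMEOUT"): "path_blocked",
    ("amazon", "E-2002"): "path_blocked",
    ("gemini", "ERR_OBSTRUCTION"): "path_blocked",   # invented example code
    ("amazon", "E-1101"): "battery_critical",        # invented example code
}

def translate(vendor: str, code: str) -> str:
    """Return the canonical error name for a vendor code, or 'unknown' if unmapped."""
    return CANONICAL.get((vendor.lower(), code), "unknown")
```

With this shape, Balyo's "OBSTACLE_TIMEOUT" and Amazon's "E-2002" both resolve to the same canonical error, so the checkbox-driven fix steps only need to be written once per canonical name.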