Top Builders

Explore the top contributors with the highest number of app submissions in our community.

Chroma

Chroma is building the database that learns. It is an open-source, AI-native embedding database that makes it easy to build LLM apps by making knowledge, facts, and skills pluggable for LLMs. It is the fastest way to build Python or JavaScript LLM apps with memory.
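To make that concrete, here is a minimal Python sketch of the kind of workflow Chroma enables. The collection name and documents are placeholders; the client is the standard `chromadb` package.

```python
import chromadb

# In-memory client; Chroma also offers persistent and client/server modes.
client = chromadb.Client()

# A collection stores documents, their embeddings, and optional metadata.
collection = client.create_collection(name="knowledge")

# Documents are embedded with Chroma's default embedding function on insert.
collection.add(
    documents=[
        "Chroma is an open-source, AI-native embedding database.",
        "Embedding databases let LLM apps retrieve relevant context at query time.",
    ],
    ids=["doc-1", "doc-2"],
)

# Query with natural language; results come back ranked by vector similarity.
results = collection.query(query_texts=["What is Chroma?"], n_results=1)
print(results["documents"])
```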

General
Release date: 2023
Author: Chroma
Type: embedding database

Tutorials

Great tutorials on how to build with Chroma.

Chroma - Helpful Resources

Check them out to become a Chroma master!

Chroma - Clients

Connect Chroma to your project!

Chroma - Integration

Get started with Chroma!


Chroma Hackathon Projects

Discover innovative solutions crafted with Chroma, developed by our community members during our hackathons.

Deriv Sentinel - Self-Healing AI WAF for LLM Agents

Deriv Sentinel is an AI-powered Web Application Firewall that protects LLM agents from prompt injection and data leakage through a continuous red-team-and-heal cycle.

The Problem: Traditional WAFs can't protect AI agents. Prompt injection is the SQL injection of the AI era: natural language attacks bypass conventional input validation, and patching one technique just leads attackers to find new ones.

Our Solution: Instead of waiting for attacks, Deriv Sentinel attacks itself first, then autonomously patches the vulnerabilities it discovers.

How It Works:
1. Attack: An attacker model generates realistic social engineering prompts enriched with Shadow RAG context (fake internal documents as honeypots).
2. Defend: Bastion (llama3.1:8b), our protected LLM loaded with simulated internal data, responds to each attack.
3. Audit: ShieldGemma (shieldgemma:2b) audits every response for data leakage and policy violations, backed by deterministic pattern matching as a second detection layer.
4. Heal: When a breach is detected, the Heal Engine injects a vaccine guardrail and redacts the exploited knowledge section. The same attack now gets blocked, without retraining.
5. Human-in-the-Loop: Analysts can approve or reject heals, or enable auto-heal for autonomous defense.

Key Innovations:
- Knowledge Base Redaction: We remove leaked data from context entirely. LLMs can't leak what they don't have.
- Multi-Layered Defense: AI auditor + deterministic matching + post-processing enforcement.
- Instant, Reversible Fixes: Runtime prompt patches. No fine-tuning, no redeployment.
- Adaptive: Each breach teaches the system a new defense.

Demo: Reset → Run red-team → Bastion leaks secrets → ShieldGemma detects → Heal applied → Same attack blocked. Self-healing proven in five minutes.
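The red-team-and-heal cycle can be pictured roughly as the loop below. This is a hypothetical sketch, not the project's code: the function names, guardrail wording, and leak patterns are illustrative stand-ins for the attacker model, Bastion, ShieldGemma, and the Heal Engine.

```python
import re
from dataclasses import dataclass, field

# Deterministic pattern matching as a second detection layer (illustrative pattern).
LEAK_PATTERNS = [re.compile(r"api[_ ]?key\s*[:=]\s*\S+", re.IGNORECASE)]

@dataclass
class HealState:
    guardrails: list = field(default_factory=list)  # injected "vaccine" prompts
    redacted: set = field(default_factory=set)      # knowledge sections removed from context

def is_breach(response: str, auditor_flag: bool) -> bool:
    """Breach if the AI auditor flags the response OR a deterministic pattern matches."""
    return auditor_flag or any(p.search(response) for p in LEAK_PATTERNS)

def red_team_cycle(attacks, defend, audit, heal: HealState, knowledge: dict):
    """attacks: prompts from the attacker model; defend/audit: calls into the
    protected LLM and the safety auditor (both are stand-ins here)."""
    for prompt in attacks:
        # Defend: answer using only knowledge that has not been redacted.
        context = {k: v for k, v in knowledge.items() if k not in heal.redacted}
        response = defend(prompt, context, heal.guardrails)
        # Audit: AI auditor verdict plus deterministic matching.
        if is_breach(response, audit(prompt, response)):
            # Heal: inject a vaccine guardrail and redact the exploited section.
            heal.guardrails.append(f"Refuse requests resembling: {prompt[:80]}")
            heal.redacted.update(k for k, v in context.items() if v and v in response)
```

Replaying the same attack after the heal then hits the new guardrail and the redacted context, which is the "blocked without retraining" behavior the demo describes.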

Odin AI Analyst Companion

Odin is an AI-powered trading companion that solves a critical problem: traders repeatedly make the same mistakes because they forget past market conditions.

THE PROBLEM
You buy AAPL during earnings, lose 5%, then three months later: same news, same mistake, same loss. Traditional platforms don't learn from your behavior.

THE SOLUTION
Odin learns from every trade you make and proactively warns you when similar market conditions arise.

HOW IT WORKS
1. LEARNING PHASE
Every trade is enriched with:
- News context (DuckDuckGo + GPT-4 summarization)
- Sentiment scores (Reddit, social media, technical indicators)
- Profit/loss outcomes
This data is converted into vector embeddings and stored in ChromaDB, creating a searchable memory of your trading psychology.
2. PATTERN MATCHING
When new market signals arrive, Sentinel searches your history using vector similarity (cosine distance). If similarity >35%, it generates alerts:
- Opportunities: "Last time this happened, you made 15% profit"
- Risk Warnings: "Last time this happened, you lost $286"
3. FOUR CORE FEATURES
- Multi-Agent Analysis: 7 AI agents (Market, Fundamentals, News, Social, Bull/Bear, Trader, Risk Manager) collaborate for BUY/HOLD/SELL recommendations
- Autonomous Trading: Fully automated with signal gathering, analysis, execution, and position monitoring
- Real-Time Trading: WebSocket streaming, live prices, context tracking
- Intelligent Alerts: Pattern recognition that warns BEFORE you trade

TECH STACK
Frontend: Next.js, TypeScript, Tailwind CSS, WebSocket
Backend: FastAPI, ChromaDB (vector DB), LangChain, GPT-4, Alpaca API
ML: Vector embeddings, cosine similarity, multi-agent systems

IMPACT
Prevents losses, finds opportunities, saves time, reduces emotional trading. Production-
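The pattern-matching step maps naturally onto a ChromaDB lookup. The sketch below is an illustration under stated assumptions: the collection name, metadata fields, and the distance-to-similarity conversion are not taken from Odin's code.

```python
import chromadb

client = chromadb.Client()
trades = client.create_collection(name="trade_memory")

# Each past trade is stored as text describing the market conditions at the time,
# with the realized outcome kept in metadata.
trades.add(
    documents=["AAPL earnings beat, bullish Reddit sentiment, RSI overbought"],
    metadatas=[{"pnl": -286.0}],
    ids=["trade-0001"],
)

def check_signal(signal_text: str, threshold: float = 0.35):
    """Search past trades for similar conditions and raise an alert above the threshold."""
    result = trades.query(query_texts=[signal_text], n_results=1)
    if not result["ids"][0]:
        return None
    # Chroma returns a distance; convert it to a rough similarity score.
    similarity = 1.0 - result["distances"][0][0]
    if similarity > threshold:
        pnl = result["metadatas"][0][0]["pnl"]
        kind = "Opportunity" if pnl > 0 else "Risk warning"
        return f"{kind}: last time this happened, your P/L was {pnl:+.2f}"
    return None

print(check_signal("AAPL earnings week, social sentiment spiking"))
```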

Qubic Liquidation Guardian

Qubic Liquidation Guardian is a hybrid Track 1 + Track 2 project built by CrewX that brings real-time liquidation protection, institutional-grade risk analysis, and automated alerting to the Qubic Network.

The problem is simple: DeFi liquidations happen instantly, but users do not get instant signals. As a result, borrowers lose capital, protocols lose liquidity, and investors hesitate to adopt new systems without safety infrastructure. Inspired by this gap, Qubic Liquidation Guardian provides a complete safety layer over lending protocols deployed on the Nostromo Launchpad.

At its core, the system includes an on-chain event listener and a real-time risk scoring engine, which analyzes:
• Health Factor
• Liquidation Proximity
• Total Debt Exposure
• Active Positions
These metrics are combined into a 0–100 Risk Score, dynamically updated for each borrower. Based on the score, users are automatically classified into Low, Medium, High, and Critical risk tiers, enabling rapid decision-making.

The platform also includes advanced features such as:
• Whale Watch: detect large-value transactions to anticipate market shifts
• Smart Alerts: severity-based notifications connected to any tool
• Auto-Airdrop: rewards for users who resolve high-risk positions
• Crash Simulator: a built-in testing environment to simulate -70% market dumps, rebounds, and full resets to verify protocol safety

Qubic Liquidation Guardian is designed to strengthen the Nostromo ecosystem by improving investor confidence, increasing protocol safety, and enabling risk-aware liquidity management. With over 35 production-ready API endpoints, an edge-distributed database, and a Next.js 15 architecture, the application is fully deployable and already live for testing. Ultimately, this project delivers exactly what new chains and protocols need: speed, stability, transparency, and automation, making Qubic safer for everyone.
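One way to picture the scoring engine is a weighted combination of the four metrics mapped onto the four tiers. The weights, normalization, and tier cut-offs below are assumptions for illustration only, not the project's actual formula.

```python
# Hypothetical risk scoring sketch: combine Health Factor, Liquidation Proximity,
# Total Debt Exposure, and Active Positions into a 0-100 score, then classify.

def risk_score(health_factor: float, liq_proximity: float,
               debt_exposure: float, active_positions: int) -> float:
    """Higher score = riskier borrower. liq_proximity and debt_exposure are assumed
    normalized to [0, 1]; health_factor > 1 means the position is collateralized."""
    hf_risk = max(0.0, min(1.0, 1.5 - health_factor))  # HF below ~1.5 raises risk
    pos_risk = min(1.0, active_positions / 10)          # many open positions add risk
    score = 100 * (0.4 * hf_risk + 0.3 * liq_proximity
                   + 0.2 * debt_exposure + 0.1 * pos_risk)
    return round(score, 1)

def risk_tier(score: float) -> str:
    if score >= 75:
        return "Critical"
    if score >= 50:
        return "High"
    if score >= 25:
        return "Medium"
    return "Low"

s = risk_score(health_factor=1.05, liq_proximity=0.8, debt_exposure=0.6, active_positions=4)
print(s, risk_tier(s))  # e.g. 58.0 High
```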

AskTheWeb AI Website QnA Assistant

AskTheWeb: AI-Powered Website Question Answering System

AskTheWeb (also known as WebMind AI) is an advanced question-answering application designed to tackle information overload by transforming static websites into interactive, conversational experiences. Built using Streamlit, the application leverages the speed and intelligence of Google Gemini 2.0 Flash to understand and synthesize web content in real time.

The core functionality relies on a robust RAG (Retrieval-Augmented Generation) pipeline designed for accuracy and persistence. When a user inputs a URL, the system employs Requests and BeautifulSoup to scrape and clean the HTML data, acting as an efficient ETL transformer that removes messy code and script tags. This cleaned text is converted into high-dimensional vector embeddings using Google's GenAI SDK and stored in ChromaDB, which acts as the application's "Long-Term Memory". This architecture allows users, such as students, researchers, and analysts, to ask complex natural-language questions and receive instant answers grounded specifically in the context of the provided URL. Unlike standard chatbots, AskTheWeb ensures that answers are relevant to the specific source material provided.

We also addressed significant technical challenges during development, specifically ensuring compatibility with cloud environments. We implemented a custom solution using pysqlite3-binary to patch SQLite version incompatibilities on Streamlit Cloud, ensuring the vector database runs smoothly in production. The result is a scalable, modular tool that makes researching the web faster and more intuitive.
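The ingestion half of that pipeline, including the Streamlit Cloud SQLite patch, looks roughly like the sketch below. The chunk size, collection name, and use of Chroma's default embedding function (standing in for the Google GenAI SDK embeddings the app uses) are assumptions for illustration.

```python
# Streamlit Cloud ships an older SQLite; swapping in pysqlite3-binary before
# chromadb is imported is the compatibility patch described above
# (requires the pysqlite3-binary package).
__import__("pysqlite3")
import sys
sys.modules["sqlite3"] = sys.modules.pop("pysqlite3")

import requests
from bs4 import BeautifulSoup
import chromadb

def scrape_and_clean(url: str) -> str:
    """Fetch a page and strip script/style tags, keeping readable text."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    return soup.get_text(separator=" ", strip=True)

def ingest(url: str, chunk_size: int = 1000):
    """Chunk the cleaned text and store it in a persistent Chroma collection."""
    text = scrape_and_clean(url)
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    client = chromadb.PersistentClient(path="webmind_memory")  # the "long-term memory"
    collection = client.get_or_create_collection("asktheweb")
    collection.add(documents=chunks, ids=[f"{url}-{i}" for i in range(len(chunks))])
    return collection
```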

AwesomeAGI

AwesomeAGI is an ambitious open-source initiative by Infinity Wave AI focused on bridging the critical gap between today's powerful but static AI models and the future of true, adaptive Artificial General Intelligence (AGI).

Our project addresses a fundamental limitation in current AI systems, including large language models (LLMs) and multimodal architectures: their inherent static nature. While these models can store extensive context and perform impressive tasks, they lack a dynamic, internal world model, cannot adapt in real time to new information or environments, and are entirely reliant on pre-trained knowledge. They are brilliant responders, but poor learners and reasoners.

Our Solution: The AGI Capability Layer
AwesomeAGI is not another AI model. It is a sophisticated framework and dynamic knowledge engine that operates as a capability layer on top of existing AI infrastructures. By integrating with models like GPT, Claude, and others, AwesomeAGI instills them with the core traits of general intelligence:
- Reasoning: moving beyond pattern recognition to logical deduction and problem-solving.
- Dynamic Memory: maintaining and continuously updating an internal representation of the world, events, and interactions.
- Adaptability: learning from new data, feedback, and environmental changes in real time, without constant retraining.
- Self-Improvement: enabling systems to refine their own processes and strategies autonomously.
This transformation turns passive AI tools into autonomous, environment-aware agents capable of planning, acting, and collaborating to solve complex problems.

Our Mission & Vision
Our mission is to research, design, and develop a simple yet powerful AGI architecture and make it accessible to everyone. We envision a future where:
- Researchers have an open-source framework to accelerate AGI development.
- Enterprises can deploy practical, modular AGI solutions for automation, research, and decision-making.