Top Builders

Explore the top contributors with the highest number of app submissions in our community.

LLaMA (Large Language Model Meta AI)

LLaMA is a state-of-the-art foundational large language model designed to help researchers advance their work in the subfield of AI. It is available in multiple sizes (7B, 13B, 33B, and 65B parameters) and aims to democratize access to large language models by requiring less computing power and resources for training and deployment. LLaMA is developed by the FAIR team of Meta AI and has been trained on a large set of unlabeled data, making it ideal for fine-tuning for a variety of tasks.

General
Release date: 2023
Author: Meta AI FAIR Team
Model sizes: 7B, 13B, 33B, 65B parameters
Model architecture: Transformer
Training data sources: CCNet, C4, GitHub, Wikipedia, Books, ArXiv, Stack Exchange
Supported languages: 20 languages with Latin and Cyrillic alphabets

Start building with LLaMA

LLaMA provides an opportunity for researchers and developers to study large language models and explore their applications in various domains. To get started with LLaMA, you can access its code through the GitHub repository.

Meta LLaMA AI technology Hackathon projects

Discover innovative solutions crafted with Meta LLaMA AI technology, developed by our community members during our engaging hackathons.

NeuroPay — Pay-Per-Inference AI on Arc

NeuroPay is a live, deployed Pay-Per-Inference AI API competing in the Usage-Based Compute Billing track. It is hosted at https://neuropay-arc.onrender.com and powered end-to-end by Circle's infrastructure on Arc.

The problem: AI inference requires fair usage-based pricing, but traditional blockchain gas fees destroy the economics. Ethereum gas costs $2.50 per transaction, which is 1,667 times more than a $0.0015 inference charge. Even Polygon at $0.05 gas is 33 times the payment value. Sub-cent AI billing is economically impossible on traditional chains.

The solution: Circle Nanopayments on Arc eliminates gas overhead entirely. NeuroPay charges exactly $0.001 per 100 tokens in USDC, settled on Arc in under one second with zero gas cost, giving providers 100% margin retention.

What we built:

- Real AI inference using Groq's Llama 3.1 model: actual tokens counted, actual billing
- Circle Nanopayments integration for gas-free USDC settlement on Arc
- Agent-to-agent payment simulation showing autonomous AI economics
- Live transaction feed with Arc tx hashes verifiable on the block explorer
- Cumulative analytics dashboard tracking USDC volume vs. traditional gas costs
- Full REST API with /infer, /deposit, /stats, and /margin-analysis endpoints
- Professional web dashboard showing real-time billing metrics

Key proof points:

- 55+ on-chain transactions demonstrated at sub-cent pricing
- $0.00 gas cost across all transactions via Nanopayments
- Traditional gas equivalent saved: $5.50+ across the demo
- Margin multiplier: 66x more efficient than Polygon, 1,667x vs. Ethereum

NeuroPay proves that usage-based AI billing at machine scale is only viable on Arc with Circle Nanopayments, and shows exactly what the agentic economy infrastructure layer looks like.
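The billing arithmetic behind these claims can be sketched in a few lines. This is a minimal illustration, not NeuroPay's actual implementation; the function names and the fixed gas figures are assumptions taken from the description above.

```python
# Illustrative sketch of per-inference billing math as described for NeuroPay:
# $0.001 USDC per 100 tokens, compared against fixed per-tx gas on other chains.
# Function names and constants are assumptions for illustration only.

PRICE_PER_100_TOKENS = 0.001   # USDC per 100 tokens, per the project description
ETHEREUM_GAS = 2.50            # approximate fixed gas cost per tx, USD
POLYGON_GAS = 0.05

def inference_charge(tokens: int) -> float:
    """Usage-based charge in USDC for a single inference call."""
    return round(tokens / 100 * PRICE_PER_100_TOKENS, 6)

def gas_overhead_ratio(gas_cost: float, charge: float) -> float:
    """How many times the gas fee exceeds the payment it settles."""
    return gas_cost / charge

charge = inference_charge(150)  # a 150-token completion
print(charge)                                            # 0.0015
print(round(gas_overhead_ratio(ETHEREUM_GAS, charge)))   # 1667
print(round(gas_overhead_ratio(POLYGON_GAS, charge)))    # 33
```

The output reproduces the ratios quoted above: a $0.0015 charge is dwarfed 1,667x by Ethereum gas and 33x by Polygon gas, which is why a zero-gas settlement layer is a precondition for sub-cent billing.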

QuantTrader

Most retail traders don't lose because they lack access to data; they lose because they don't understand it. Automated trading tools have existed for years, but they share a fundamental flaw: they're black boxes. A signal fires, a position opens, and the trader has no idea why. When volatility spikes and the system keeps buying into a falling market, panic sets in. They override it, they revenge trade, they blow the account. I built QuantTrader to fix that specific failure mode.

The architecture is split into two deliberate layers. First, a deterministic Python engine handles all signal generation, with entry and exit points calculated mathematically and zero LLM involvement. This layer doesn't hallucinate. It doesn't guess. It computes. Second, Llama 3.3, running on Groq for sub-second inference, takes those signals and the live exchange state, then generates a plain-English market thesis. Not a summary: a mentor's explanation, the kind of reasoning a seasoned trader would walk you through before placing a position. Execution is handled through the Kraken CLI via the Model Context Protocol (MCP), which keeps credentials environment-isolated and the integration clean.

But the part I spent the most time on was safety. Three guardrails are hard-coded and non-negotiable. A 2% risk cap limits capital exposure per trade (the institutional standard), enforced at the logic layer. A 3-loss circuit breaker halts all trading after consecutive losses, cutting off the revenge-trading spiral before it starts. A high-fidelity failover protocol preserves session context during API outages or exchange maintenance windows, so a dropped connection doesn't mean a lost position.

QuantTrader isn't trying to be the fastest algo on the market. My priority was building something a real trader could actually trust, because they understand what it's doing and why.
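The first two guardrails are simple enough to sketch concretely. The class and method names below are illustrative assumptions, not QuantTrader's actual API; the 2% cap and 3-loss breaker values come from the description above.

```python
# Illustrative sketch of the two hard-coded guardrails described for QuantTrader:
# a 2% per-trade risk cap and a 3-consecutive-loss circuit breaker.
# Class and method names are assumptions for illustration only.

class Guardrails:
    RISK_CAP = 0.02              # max fraction of capital risked per trade
    MAX_CONSECUTIVE_LOSSES = 3   # losses in a row before all trading halts

    def __init__(self, capital: float):
        self.capital = capital
        self.consecutive_losses = 0
        self.halted = False

    def position_size(self, requested: float) -> float:
        """Clamp requested exposure to 2% of current capital."""
        return min(requested, self.capital * self.RISK_CAP)

    def record_trade(self, pnl: float) -> None:
        """Track the loss streak; trip the circuit breaker on the third loss."""
        self.capital += pnl
        if pnl < 0:
            self.consecutive_losses += 1
            if self.consecutive_losses >= self.MAX_CONSECUTIVE_LOSSES:
                self.halted = True   # all trading stops until manual review
        else:
            self.consecutive_losses = 0   # a win resets the streak

g = Guardrails(capital=10_000)
print(g.position_size(500))   # 200.0, capped at 2% of $10k
for pnl in (-50, -30, -20):   # three losses in a row
    g.record_trade(pnl)
print(g.halted)               # True, circuit breaker tripped
```

Enforcing both rules in the deterministic layer, rather than asking the LLM to respect them, is what makes them non-negotiable: no generated thesis can override a size clamp or a halted flag.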

InboxSentinel

Do you ever open your Gmail and struggle to extract critical information from the clutter created by informational emails, newsletters, promotions, OTPs, and verification messages? InboxSentinel solves this by acting as an autonomous AI intelligence layer over your inbox. Powered by an LLM, it periodically wakes up using OpenClaw's autonomous cron job framework, scans your latest 5–10 emails, calculates an urgency score, and categorizes them into Personal, Informational, OTP, Verification, Fraud, Promotions, and more, without requiring any manual trigger. If an urgent email needs your response while you're occupied, InboxSentinel reads it, generates a context-aware reply, and saves it directly into your drafts folder so you can quickly review and send it when convenient.

OTPs and verification emails are only useful for a short 1–2 minute window; after that, they simply clutter your inbox. InboxSentinel intelligently detects such emails, assigns a safe 60-minute timer, and automatically moves them to trash after expiry, with full recoverability for 30 days to eliminate any risk. Fraud and purely informational emails are archived automatically, while promotions and newsletters are not discarded blindly; instead, InboxSentinel groups them, extracts meaningful offers (such as travel deals from platforms like MakeMyTrip), summarizes them, and sends you a single "Daily Digest" email so you can review everything important in minutes.

Through OpenClaw's scheduled, self-triggering cron architecture, the entire system wakes up on a configurable interval (for example, every 6 hours), performs its job, and goes back to sleep.
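The OTP-cleanup rule described above can be sketched as a small decision function. This is an illustration under stated assumptions: the keyword classifier below stands in for the LLM categorization, and the function names are hypothetical, not InboxSentinel's actual code. Only the 60-minute timer comes from the description.

```python
# Illustrative sketch of InboxSentinel's OTP cleanup rule: detected OTP or
# verification emails get a 60-minute grace timer, then move to trash
# (where Gmail keeps them recoverable for 30 days).
# classify() is a crude keyword stand-in for the LLM categorizer; all
# function names here are assumptions for illustration only.

from datetime import datetime, timedelta

OTP_TTL = timedelta(minutes=60)   # safe timer from the project description

def classify(subject: str) -> str:
    """Toy stand-in for the LLM-based category assignment."""
    s = subject.lower()
    if "otp" in s or "verification code" in s:
        return "OTP"
    if "unsubscribe" in s or "deal" in s:
        return "Promotions"
    return "Personal"

def should_trash(category: str, received_at: datetime, now: datetime) -> bool:
    """Trash OTP mail once its 60-minute window has expired."""
    return category == "OTP" and now - received_at > OTP_TTL

now = datetime(2024, 1, 1, 12, 0)
print(classify("Your OTP is 482913"))                         # OTP
print(should_trash("OTP", now - timedelta(minutes=90), now))  # True
print(should_trash("OTP", now - timedelta(minutes=5), now))   # False
```

Each cron wake-up would run a check like this over the scanned batch, so expired codes are swept out while fresh ones stay visible for the minute or two they are actually needed.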