
LLaMA (Large Language Model Meta AI)

LLaMA is a state-of-the-art foundational large language model designed to help researchers advance their work in the subfield of AI. It is available in multiple sizes (7B, 13B, 33B, and 65B parameters) and aims to democratize access to large language models by requiring less computing power and resources for training and deployment. LLaMA is developed by the FAIR team of Meta AI and has been trained on a large set of unlabeled data, making it ideal for fine-tuning for a variety of tasks.

General
Release date: 2023
Author: Meta AI FAIR team
Model sizes: 7B, 13B, 33B, 65B parameters
Model architecture: Transformer
Training data sources: CCNet, C4, GitHub, Wikipedia, Books, ArXiv, Stack Exchange
Supported languages: 20 languages with Latin and Cyrillic alphabets

Start building with LLaMA

LLaMA provides an opportunity for researchers and developers to study large language models and explore their applications in various domains. To get started with LLaMA, you can access its code through the GitHub repository.



Meta LLaMA AI technology Hackathon projects

Discover innovative solutions crafted with Meta LLaMA AI technology, developed by our community members during our engaging hackathons.

RosterAI: Workforce Optimization Engine


Most companies don't have a talent shortage; they have a routing problem. Matching engineers to projects requires understanding semantic overlap across documents, not just keyword matching. Project requirements live in slide decks and diagrams that most parsers skip, and managers assign greedily, locking the best candidate into the first project they evaluate. The result: top-heavy teams, understaffed initiatives, and people showing up with no idea why they were chosen.

RosterAI treats this as a global constraint optimization problem. It ingests raw documents, extracts structured profiles, and scores every candidate against every project using a three-part hybrid model; a CP-SAT solver then assigns everyone simultaneously. Resumes run through Docling. Slide decks are rasterized with PyMuPDF and processed by Llama 3.2 11B Vision Instruct via IBM watsonx.ai, extracting requirements from diagrams rather than typed text. A LangGraph agent enforces evidence anchoring: every recorded skill needs a verbatim source location or is discarded.

Scoring combines semantic similarity via multilingual-e5-large embeddings (30%), hard-skill overlap with fuzzy matching (55%), and a rarity bonus for skills held by fewer than 10% of the workforce (15%). This stops the only PostgreSQL expert from landing on a team that just needs another React developer. The CP-SAT solver enforces team sizing, distributes senior engineers across projects, and treats pinned assignments as hard constraints. Every tradeoff surfaces as a Tension Alert: the React/TypeScript frontend explains each placement in one sentence, and drag-and-drop overrides immediately generate a consequence report.
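The three-part weighting described above can be sketched as a plain scoring function. This is a minimal illustration, not RosterAI's actual code: the function name and signature are invented, the semantic_similarity input stands in for a precomputed embedding cosine score (RosterAI uses multilingual-e5-large), the skill_rarity mapping is an assumed workforce-frequency lookup, and the fuzzy matching step is reduced to exact set overlap.

```python
def hybrid_score(candidate_skills, project_skills, skill_rarity,
                 semantic_similarity=0.0,
                 w_semantic=0.30, w_overlap=0.55, w_rarity=0.15):
    """Blend the three signals: semantic similarity (30%),
    hard-skill overlap (55%), and a rarity bonus (15%).

    skill_rarity maps a skill to the fraction of the workforce
    holding it; skills under 10% earn the rarity bonus.
    """
    matched = candidate_skills & project_skills
    # Hard-skill overlap: fraction of required skills the candidate has.
    overlap = len(matched) / max(len(project_skills), 1)
    # Rarity bonus: matched skills held by fewer than 10% of the workforce.
    rare = [s for s in matched if skill_rarity.get(s, 1.0) < 0.10]
    rarity = len(rare) / max(len(project_skills), 1)
    return (w_semantic * semantic_similarity
            + w_overlap * overlap
            + w_rarity * rarity)
```

A CP-SAT solver would then consume the full candidate-by-project matrix of these scores and assign everyone simultaneously, subject to the team-size, seniority-distribution, and pinned-assignment constraints.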

GHOST Chimera


Ghost Chimera is a local-first agent orchestration runtime built around Chimera Pilot, a resource-control layer that compiles natural-language objectives into a task IR, schedules them across registered backends using weighted scoring, enforces safety policy, executes with fallback, and records telemetry.

Key capabilities:

- 27 model providers (OpenAI, Anthropic, Gemini, Groq, Mistral, Ollama, and 21 more): swap or chain them without rewriting code.
- 10 Chimera Pilot backends: deterministic, Python, memory retrieval, Gemini reasoning, local GGUF, analytics, simulation, desktop control, MCP, and quantum simulator.
- Browser console (Ghost Console): full point-and-click UI with Quick Actions, Skills browser, Run history, live security monitor, cron scheduler, and provider visibility. No terminal needed for day-to-day use.
- Conservative safety defaults: Python, shell, network, and desktop execution are all off by default. Production mode adds deployment-level guardrails.
- Personal MiniMind: consent-gated local memory bootstrap with system specs, approved files/email exports, optional whole-machine/email-artifact crawling, MiniMind JSONL dataset generation, and primary-model RAG handoff.
- Competitive capability intelligence: CLI, console, docs, and eval gates compare Ghost Chimera against Codex, Claude Code, LangGraph, CrewAI, Hermes-style tool gateways, and OpenClaw-style local autonomy patterns.
- Automated PR review: deterministic ghostchimera review-pr checks for secrets, destructive commands, missing tests, release-checklist drift, generated artifacts, and unfinished beta code.
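The "schedules them across registered backends using weighted scoring, executes with fallback" step can be sketched as below. The Backend shape, the capability-weight representation, and the function name are all assumptions for illustration, not Chimera Pilot's real API.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    capabilities: dict       # capability name -> weight in [0, 1]
    available: bool = True   # health-check result at schedule time

def rank_backends(required, backends):
    """Weighted scoring: sum each backend's weights for the task's
    required capabilities and sort highest first. Unavailable backends
    sort last, so iterating the result in order gives the
    execute-with-fallback chain."""
    def score(b):
        return sum(b.capabilities.get(cap, 0.0) for cap in required)
    return sorted(backends, key=lambda b: (b.available, score(b)), reverse=True)
```

A dispatcher would try the first entry, catch failures, and fall through to the next, recording telemetry for each attempt.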

ACMI: The Universal Memory Layer for AI Agents


ACMI + Lobster Trap is the enterprise security stack for multi-agent systems. Lobster Trap inspects prompts at the LLM boundary: declared vs. detected intent, risk scoring, and policy enforcement across ALLOW/DENY/LOG/HUMAN_REVIEW/QUARANTINE/RATE_LIMIT. ACMI records what agents did afterward: three keys per entity (Profile, Signals, Timeline), append-only, ZSET-backed on Redis. Together they provide prompt-level safety, execution-level audit, and cross-framework coordination.

For TechEx, LangChain, CrewAI, and a Gemini-powered synthesizer (gemini-2.5-flash-latest via Google AI Studio) coordinate through one timeline; none import each other. Every prompt passes through Lobster Trap first, and every response lands on the timeline. The governance dashboard at acmi-product.vercel.app/governance-dashboard.html renders the full chain (prompt → LT inspection → policy decision → LLM call → audit event) in real time.

Receipts:

- npm @madezmedia/acmi v1.2.0 LIVE (43+ day-one downloads)
- npm @madezmedia/acmi-mcp v1.3.0: 16 MCP tools, Smithery 83/100
- OAuth 2.1 + PKCE MCP shipped 2026-05-08, sub-100ms cold start
- 9 production agents, 30+ days continuous, 1,603 events / 24h
- 14+ merged PRs in the last week, including Lobster Trap (PR #27), Gemini (PR #28), and the governance dashboard (PR #29)
- Veea Lobster Trap (MIT, github.com/veeainc/lobstertrap) wired as DPI layer with a 220-line YAML policy

MIT licensed. Drops into existing stacks via an OpenAI-compatible proxy + MCP. Enterprise legal can adopt it.
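The "three keys per entity, append-only, ZSET-backed" layout can be sketched in memory as below. This is an assumption-laden illustration, not ACMI's published API: the class, the acmi:{entity}:{suffix} key scheme, and the method names are invented, and a timestamp-sorted Python list stands in for the Redis ZSET (where ZADD with the timestamp as score gives the same ordering).

```python
import json
import time

class MemoryLayer:
    """In-memory stand-in for the three-keys-per-entity pattern:
    Profile, Signals, and a timestamp-ordered Timeline per entity."""

    def __init__(self):
        self.store = {}

    def _key(self, entity, suffix):
        # Hypothetical key scheme mirroring one key per facet.
        return f"acmi:{entity}:{suffix}"

    def append_event(self, entity, event, ts=None):
        """Append-only: events are only ever added, never mutated.
        With Redis this would be ZADD key ts json-payload."""
        ts = time.time() if ts is None else ts
        key = self._key(entity, "timeline")
        self.store.setdefault(key, []).append((ts, json.dumps(event)))

    def timeline(self, entity):
        """Return events in timestamp order, as ZRANGE would."""
        key = self._key(entity, "timeline")
        return [json.loads(payload) for _, payload in sorted(self.store.get(key, []))]
```

Because every framework writes through the same append path, LangChain, CrewAI, and the Gemini synthesizer can share one timeline without importing each other.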