Top Builders

Explore the top contributors showcasing the highest number of app submissions within our community.

Lobster Trap

Lobster Trap is a reverse proxy built by Veea that sits between AI agents and any OpenAI-compatible LLM backend. It performs deep prompt inspection (DPI) on both incoming prompts and outgoing responses, classifying threats and enforcing YAML-based firewall rules in sub-millisecond time using compiled regex patterns. No additional model calls, API keys, cloud connectivity, or runtime dependencies are required.

General
Release date18 Feb 2026
DeveloperVeea
TypeOpen-source LLM security proxy
LicenseMIT
GitHubveeainc/lobstertrap
DocumentationREADME and policy reference

Core Features

  • Regex-based DPI - all classification runs in sub-millisecond time using compiled regex patterns, with no secondary LLM calls for threat detection.
  • Bidirectional inspection - rules apply to both incoming prompts and outgoing responses, catching both injection attempts and exfiltration in responses.
  • Structured metadata extraction - detects and surfaces intent categories, risk scores, credentials, PII, system commands, injection attempts, exfiltration patterns, target paths, domains, and risky commands.
  • Programmable YAML policy - first-match-wins ingress and egress rules with actions: ALLOW, DENY, LOG, HUMAN_REVIEW, QUARANTINE, and RATE_LIMIT.
  • Declared vs. detected intent - agents can declare intent via _lobstertrap request headers; Lobster Trap compares declared against detected and reports mismatches in the audit trail.
  • Real-time dashboard - built-in web UI at http://localhost:8080/_lobstertrap/ showing live traffic, decisions, and metadata.
  • JSON-line audit logs - structured logs of every decision, readable by security tooling or a regulator.
  • Drop-in deployment - transparent proxy for any tool using the OpenAI chat completions API; no application code changes required.

Supported Backends

Lobster Trap works with any OpenAI-compatible inference endpoint:

BackendNotes
OllamaDefault target in quickstart config
vLLMCompatible via OpenAI-compatible API
llama.cppCompatible via server mode
OpenAIProxy to production OpenAI API
AnthropicVia OpenAI-compatible adapter
GeminiVia OpenAI-compatible adapter

Policy System

Policies are defined in YAML and loaded at startup. Each rule specifies a direction (ingress or egress), a priority, match conditions, and an action.

Available actions:

ActionBehavior
ALLOWPass the request through
DENYBlock and return an error
LOGAllow but write a log entry
HUMAN_REVIEWFlag for manual review queue
QUARANTINEIsolate for deferred inspection
RATE_LIMITThrottle matching traffic

A default policy file is provided at configs/default_policy.yaml as a starting point. Rules also support network policies and filesystem restrictions in addition to content-based matching.


Tools and Resources

  • GitHub repo (MIT) - source code, issues, and contribution guide.
  • README and quickstart - full policy reference and setup instructions.
  • Default policy - configs/default_policy.yaml in the repo, ready to fork and extend.
  • Adversarial test suite - run ./lobstertrap test to validate your policy against built-in attack patterns.
  • Single-prompt debugger - run ./lobstertrap inspect "<your prompt>" to see full metadata extraction output for any input.

Deployment Options

Three ways to run Lobster Trap:

  1. Standalone - clone the repo, run make build, then start with ./lobstertrap serve. Requires Go 1.22 or later.
  2. Pre-built static binary - download a Linux, Windows, or macOS binary from the repo with no Go toolchain required.
  3. Native.Builder - already packaged inside lablab's Native.Builder environment; no setup needed.

No API keys, signups, rate limits, or cloud dependency required for any deployment path.


Ecosystem and Integrations

  • Acts as the trust layer beneath multi-agent systems, enforcing per-agent permission boundaries and logging cross-agent interactions.
  • Serves as a foundation for compliance policy packs targeting HIPAA, SOC2, or financial regulations.
  • Integrates with governance dashboards and drift monitoring tooling via its structured JSON audit log output.
  • Supported by Veea engineers in the lablab Discord for policy review, integration help, and architecture questions during hackathon build phases.

Get started by cloning the repo and running ./lobstertrap serve, or download a pre-built binary from github.com/veeainc/lobstertrap. The full policy reference is in the README.

veea Lobster Trap AI technology Hackathon projects

Discover innovative solutions crafted with veea Lobster Trap AI technology, developed by our community members during our engaging hackathons.

PolicyForge — AI Agent Security Policy Platform

PolicyForge — AI Agent Security Policy Platform

PolicyForge is an enterprise AI agent security platform that solves a critical gap: security teams cannot write or manage AI agent policies because existing tools require deep YAML expertise. With PolicyForge, any CISO or compliance officer can type a security intent in plain English and have it enforced in seconds. How it works: The user types a natural language policy such as "Block any agent that reads patient SSN or medical records." Gemini 2.0 Flash instantly converts this into a Lobster Trap YAML enforcement rule. The rule is activated and enforced immediately by the Veea Lobster Trap deep prompt inspection proxy — a MIT-licensed tool that sits between AI agents and LLM backends. Key features include a real-time security dashboard showing blocked threats, active policies, and risk scores. An attack simulator lets teams fire 10 real adversarial attacks — prompt injection, PII exfiltration, credential theft, SQL injection, jailbreak attempts — and watch them get blocked live. One-click compliance reports generate HIPAA, SOC2, and finance audit documents directly from the audit trail. PolicyForge directly addresses every Track 1 focus area: guardrails and safety layers, monitoring and observability, access control, audit trails with explainability for regulated industries, and red-teaming frameworks. The tech stack uses Gemini 2.0 Flash for policy generation, Veea Lobster Trap for DPI enforcement, FastAPI for the backend, and Next.js 15 for the frontend — deployed on Railway and Vercel.

Aegis AI

Aegis AI

As enterprises and industrial sectors rapidly deploy autonomous AI agents and edge robotics, they expose themselves to novel, critical attack vectors such as advanced prompt injections, data exfiltration, and model denial-of-service (DoS) poisoning. Traditional security perimeters are insufficient for inspecting these dynamic, semantic payloads. Aegis AI bridges this critical security gap as an enterprise-grade SecOps firewall and autonomous edge proxy. Engineered in Go and Python, Aegis AI delivers sub-millisecond local enforcement, ensuring high-speed security without compromising operational latency. The platform's architecture is built on four core pillars: Edge-Native Proxy: Leveraging Veea's Lobster Trap, I deployed a high-performance local proxy that intercepts and sanitizes traffic directly at the edge, a crucial requirement for real-time robotics and localized AI agents. Autonomous Fuzzing Engine: Powered by Gemini, Aegis features a self-healing, continuous testing pipeline. It autonomously red-teams AI agents, proactively identifying vulnerabilities and dynamically generating defensive rules before zero-day exploits can be weaponized. Real-time Semantic Filtering: The system deeply inspects inbound and outbound payloads to neutralize complex prompt injection attacks and prevent unauthorized data exfiltration. Human-in-the-Loop Governance: A dedicated CISO staging queue quarantines highly anomalous or critical security events for manual oversight, ensuring strict enterprise governance and compliance. By combining proactive autonomous defense with robust edge-level proxying, Aegis AI provides the foundational security layer necessary for the safe, scaled adoption of AI agents in mission-critical environments.

Boardroom Agents

Boardroom Agents

Boardroom is an AI due diligence copilot that turns a pitch deck into a board-ready briefing in under two minutes. Upload a PDF, an image of a whiteboard, or paste a URL, and six specialized agents spin up to interrogate the inputs in parallel: an Orchestrator parses the materials and dispatches tasks; a Researcher pulls in market context; an Analyst builds the bull case; a Red Team builds the bear case; a Synthesizer fuses them into a confident executive brief with a verdict and a confidence score; and a Verifier audits every claim before it reaches the user. The differentiator is verification. Hallucinations in the M&A domain are not abstract — they cost real money. So every claim in the final brief is extracted and tagged with a role. Analyst claims are grounded against the source deck. Red Team rebuttals are graded against world knowledge, because a sharp critique is supposed to contradict the pitch. External context, like a named regulation or a macroeconomic condition, gets its own knowledge check. The result is an Integrity Score that rises when the Red Team is right rather than falling. On the same CocoaGuard pitch deck, our verifier evolved from flagging correct disease-feasibility analyses as seven-percent-confidence hallucinations to verifying them at ninety-eight percent — purely by understanding which voice was speaking. Under the hood: a Next.js frontend on Vercel, a FastAPI backend running in Docker on Vultr Cloud Compute, and six-agent orchestration through Gemini 3 Pro and Gemini 3 Flash with structured output and live streaming over Server-Sent Events. Caddy fronts the backend; Postgres persists sessions and audit trails. Built for the Vultr and Gemini tracks of the AI Agent Olympics.