
Lokr Assistant is a multi-agent AI pipeline that acts as a senior engineering copilot: diagnosing bugs, reviewing diffs, and gating deployments with verified evidence from your actual codebase. Unlike generic AI tools that hallucinate fixes, Lokr Assistant is grounded in Lokr, a Graph-RAG static analysis engine. Lokr uses Tree-sitter to parse the codebase into ASTs, maps dependencies into a NetworkX graph, and indexes nodes in ChromaDB for semantic retrieval, so every diagnosis references real code paths, not guesses.

The system runs a four-agent pipeline with cascading skepticism:

- Analyzer: diagnoses bugs using verified context from Lokr's dependency graph. Starts with ~800 tokens and autonomously requests more via Graph-RAG.
- Action Agent: generates patches or blocker lists, cross-referencing the raw input against the Analyzer's output to catch dropped details.
- Safety Agent: evaluates risk and issues go/no-go decisions. Routes targeted revision suggestions directly back to Action, saving ~70% of tokens versus a full restart.
- Validator: validates fixes through execution tracing or generates deploy checklists, triggering revision loops on failure.

Key achievements:

- Deterministic security pre-scan: a regex-based backdoor scanner catches debug headers and hardcoded role bypasses before the LLM runs, making them impossible to hallucinate away.
- Mental execution of Boolean logic: detects when a || → && change weakens validation, catching regressions that pass syntax checks.
- Strict schema validation: agents raise errors on malformed LLM output; no silent stub data pollutes the pipeline.
- Programmatic safety net: hardcoded orchestrator rules override LLM decisions when security blockers are detected.
- Agentic context discovery: the Analyzer autonomously fetches file details via Graph-RAG instead of receiving a 21k+ token dump upfront.

Tested across six scenarios (IDOR flaws, logic regressions, performance degradation, validation weakening, migration failures, and multi-blocker deployments), passing all of them with a 7B model on 6 GB of VRAM.
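The deterministic pre-scan can be sketched in a few lines. This is a minimal illustration, not Lokr's actual rule set: the pattern list and helper name are assumptions, but the key property is real: the scan runs before any LLM call, so its findings cannot be hallucinated away.

```python
import re

# Hypothetical patterns standing in for Lokr's pre-scan rules:
# debug headers and hardcoded role bypasses that must block the pipeline.
BACKDOOR_PATTERNS = [
    (re.compile(r"X-Debug-(?:Token|Auth|Bypass)", re.IGNORECASE), "debug header"),
    (re.compile(r"role\s*=\s*[\"']admin[\"']", re.IGNORECASE), "hardcoded role bypass"),
]

def pre_scan(diff_text: str) -> list[str]:
    """Return deterministic findings for each suspicious line of a diff."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), 1):
        for pattern, label in BACKDOOR_PATTERNS:
            if pattern.search(line):
                findings.append(f"line {lineno}: {label}")
    return findings

diff = 'headers["X-Debug-Auth"] = "1"\nsession.role = "admin"  # TODO remove'
print(pre_scan(diff))
```

Because the scan is plain regex matching, its output is reproducible and auditable, which is exactly what a gating check needs.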
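Strict schema validation might look like the following sketch, where the field names are illustrative assumptions: the point is that an agent raises on malformed LLM output rather than substituting silent stub data.

```python
# Required shape of an (assumed) Analyzer response.
REQUIRED_FIELDS = {"diagnosis": str, "confidence": float, "evidence": list}

def parse_analyzer_output(payload: dict) -> dict:
    """Validate an LLM response; raise instead of patching over gaps."""
    for field, expected in REQUIRED_FIELDS.items():
        if field not in payload:
            raise ValueError(f"malformed LLM output: missing '{field}'")
        if not isinstance(payload[field], expected):
            raise ValueError(f"malformed LLM output: '{field}' is not {expected.__name__}")
    return payload

ok = {"diagnosis": "IDOR in /orders/:id", "confidence": 0.9, "evidence": ["orders.py:42"]}
parse_analyzer_output(ok)  # passes; a missing or mistyped field raises ValueError
```

Failing loudly here is what keeps downstream agents (Action, Safety, Validator) from reasoning over fabricated fields.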
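The programmatic safety net reduces to a hardcoded rule in the orchestrator. A minimal sketch (function and argument names are assumptions):

```python
def final_decision(llm_verdict: str, security_blockers: list[str]) -> str:
    """Hardcoded orchestrator rule: deterministic blockers override the LLM."""
    if security_blockers:
        return "no-go"          # the rule wins regardless of the model's verdict
    return llm_verdict

print(final_decision("go", ["hardcoded role bypass"]))  # no-go
```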
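Agentic context discovery can be sketched as a fetch loop. Here `request_more` stands in for the Analyzer's LLM call that names the next file it needs (or returns None when it has enough), and `fetch_context` stands in for Lokr's Graph-RAG lookup; both are hypothetical names. The agent starts from a small seed instead of a 21k+ token dump of the whole repo.

```python
def analyze(bug_report, request_more, fetch_context, max_rounds=3):
    """Grow context on demand: seed small, fetch only what the agent asks for."""
    context = [bug_report]                       # small seed (~800 tokens in Lokr)
    for _ in range(max_rounds):
        needed = request_more(context)           # agent names a specific file
        if needed is None:                       # agent decided it has enough
            break
        context.append(fetch_context(needed))    # targeted Graph-RAG fetch
    return context

# Toy run: the agent asks for one file, then stops.
wanted = iter(["orders.py", None])
ctx = analyze("IDOR in /orders/:id",
              request_more=lambda c: next(wanted),
              fetch_context=lambda f: f"<summary of {f}>")
print(ctx)
```

Bounding the loop with `max_rounds` keeps a small model from wandering while still letting it pull in exactly the files the dependency graph says are relevant.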
10 May 2026