
Our project is an AI Web Application Firewall (AI WAF) designed to protect AI agents and LLM-powered applications from prompt injection, jailbreaking attempts, and malicious user inputs while preserving legitimate user interactions. As AI systems become more integrated into real-world decision-making and financial workflows, traditional security models are no longer sufficient. Our platform introduces a multi-layered AI-native security approach that validates inputs, monitors model behaviour, and verifies outputs before execution. The system is built using a Next.js frontend with serverless backend security pipelines, integrated with advanced AI models such as Gemini and machine learning-based risk scoring engines. Our agent is designed to be financially intelligent, meaning it understands context around financial data, sensitive operations, and high-risk actions, allowing it to prevent exploitation attempts that target financial logic or transaction workflows. Unlike traditional rule-based filters, our platform uses semantic embedding analysis, behavioural anomaly detection, and adaptive threat intelligence to identify evolving jailbreak techniques. The platform assigns real-time risk scores to prompts and agent actions, allowing safe prompts to pass while blocking or rewriting malicious ones. This ensures security without degrading user experience. The solution is deployed using cloud-native architecture for low-latency, real-time protection, making it suitable for enterprise AI deployments, fintech AI agents, customer-facing AI systems, and autonomous decision-making platforms. Our long-term vision is to build the foundational security layer that allows organizations to safely deploy AI agents on the public internet at scale.
7 Feb 2026