
3
1
1 year of experience
Applied AI Engineer building agentic LLM applications, RAG pipelines, and automation workflows. I specialize in turning complex AI challenges into production-ready solutions. Currently hacking an AI Security platform to protect AI agents and LLMs from prompt injections while continuously discovering vulnerabilities—demonstrating multi-layered AI guardrails and autonomous pentesting. Passionate about building innovative AI prototypes that are safe, scalable, and impactful.

As organizations rapidly deploy AI agents and LLM-powered applications, new security risks emerge that traditional web security tools cannot address. Prompt injection and adversarial inputs can manipulate AI systems into leaking sensitive data, bypassing safeguards, or executing unintended actions. Since LLMs are designed to interpret natural language flexibly, distinguishing malicious intent from legitimate user input becomes a major challenge. Deriv AI Shield addresses this gap by introducing an AI-native Web Application Firewall tailored specifically for AI systems. The platform monitors and filters user prompts before they reach the LLM, detects jailbreak and prompt injection attempts, prevents sensitive data leakage through output guarding, and continuously analyzes behavior patterns to detect anomalies. In addition, the system includes an autonomous pentesting module that simulates real attack scenarios to identify vulnerabilities and measure security resilience. It generates actionable security insights and mitigation recommendations, enabling teams to strengthen defenses proactively. By combining real-time guardrails, behavioral monitoring, and continuous AI-driven security testing, Deriv AI Shield enables organizations to deploy AI agents safely in production environments without compromising functionality or user experience.
7 Feb 2026