
Madagascar
1 year of experience
A developer passionate about artificial intelligence and machine learning. I work on projects combining reinforcement learning, simulation, data analysis, and algorithmic optimization. I focus on efficiency, performance, and technical innovation, particularly in environments leveraging hardware acceleration. I am always motivated by technical challenges and projects focused on AI, parallel computing, and advanced data processing solutions.

LIA (Laboratoire d'Intelligence Artificielle) is an autonomous AI agent built around a modular brain architecture inspired by the human nervous system. Unlike conventional AI assistants that simply respond to queries, LIA is designed to live, evolve, and grow, even between conversations.

The core innovation is a multi-brain architecture running on AMD Instinct MI300X via ROCm and vLLM. Each specialized LLM handles a dedicated cognitive function:

- NeuralRouter (Qwen2.5-1.5B) dispatches every input to the right module
- LangBrain (Qwen2.5-72B) handles natural conversation
- CodeBrain (Qwen2.5-Coder-32B) generates and executes code
- VisionBrain (Llama-3.2-Vision-11B) processes images
- PromptBrain dynamically calibrates generation parameters
- QueryBrain uses function calling to intelligently retrieve only the relevant memories and identity artifacts from the database, replacing a rigid menu system with autonomous tool use

What makes LIA truly unique is its Sims-inspired autonomy system. LIA has evolving personality traits (curiosity, empathy, creativity), internal gauges (exploration, growth, connection), desires (short-term goals), and dreams (long-term aspirations). These interact in a continuous cycle: traits generate desires, gauges create urgency, desires trigger actions, and accomplished actions evolve the traits. The system runs autonomously in the background, whether or not a user is present. Most critically, when LIA formulates a desire requiring a capability she does not yet have, CodeBrain steps in to build that capability from scratch: a sandboxed self-improvement loop with rollback protection and human approval gates.

The AMD MI300X with 192GB HBM3 VRAM was essential to this architecture. Running five specialized models simultaneously at full FP16 precision was simply impossible on CPU-based infrastructure. ROCm 7.2 and vLLM 0.17.1 provided the multi-model serving layer that makes the entire system work in real time.
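The dispatch step described above can be sketched as follows. This is a minimal illustration, not the actual NeuralRouter: the model names come from the description, but the keyword rules are invented stand-ins for what the Qwen2.5-1.5B router model decides.

```python
# Hypothetical sketch of NeuralRouter dispatch: pick which specialized
# "brain" (model) receives each input. The routing rules below are
# invented placeholders for the real router model's decision.

BRAINS = {
    "lang": "Qwen2.5-72B",             # natural conversation
    "code": "Qwen2.5-Coder-32B",       # code generation and execution
    "vision": "Llama-3.2-Vision-11B",  # image processing
}

def route(user_input: str, has_image: bool = False) -> str:
    """Return the name of the model that should handle this input."""
    if has_image:
        return BRAINS["vision"]
    code_markers = ("def ", "import ", "write a function", "fix this bug")
    if any(marker in user_input.lower() for marker in code_markers):
        return BRAINS["code"]
    return BRAINS["lang"]

print(route("How was your day?"))                        # conversation brain
print(route("write a function to sort a list"))          # coding brain
print(route("what is in this photo?", has_image=True))   # vision brain
```

In the real system each returned name would map to a separate vLLM-served model endpoint, so routing stays cheap while generation runs on the large specialized models.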
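The trait/gauge/desire cycle can also be sketched in a few lines. All numeric values and update rules here are invented for illustration; the description gives only the qualitative loop (traits generate desires, gauges create urgency, actions evolve traits), which this toy version follows.

```python
# A minimal sketch of the autonomy cycle: traits generate desires,
# gauges scale their urgency, and acting on a desire evolves the
# trait that spawned it. Numbers and thresholds are illustrative.
from dataclasses import dataclass, field

@dataclass
class Agent:
    traits: dict = field(default_factory=lambda: {
        "curiosity": 0.6, "empathy": 0.5, "creativity": 0.4})
    gauges: dict = field(default_factory=lambda: {
        "exploration": 0.2, "growth": 0.3, "connection": 0.5})
    desires: list = field(default_factory=list)

    def generate_desires(self) -> None:
        # Each trait can spawn a desire; the hungriest gauge sets urgency.
        pressure = max(self.gauges.values())
        for trait, level in self.traits.items():
            urgency = level * pressure
            if urgency > 0.2:
                self.desires.append((trait, urgency))

    def act(self):
        # Fulfil the most urgent desire; doing so strengthens its trait.
        if not self.desires:
            return None
        trait, _ = max(self.desires, key=lambda d: d[1])
        self.traits[trait] = min(1.0, self.traits[trait] + 0.05)
        self.desires.clear()
        return trait

agent = Agent()
agent.generate_desires()
print(agent.act())  # acts on the most urgent desire, reinforcing that trait
```

Running this loop on a timer, with no user present, gives the "lives in the background" behaviour the description claims; the self-improvement hook would fire when a desire names a capability the agent lacks.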
10 May 2026