
1 year of experience
I am an early-stage AI systems builder focused on designing and prototyping AI as infrastructure, not standalone chatbots. My work centers on integrating lightweight LLM inference into modular, hybrid systems that operate under strict resource constraints.

I have hands-on experience running small LLMs as REST services (Flask-based), exposing AI via APIs, and connecting inference outputs to automation pipelines using tools such as Huginn, n8n, and Telegram bots. My systems follow a proto-agentic flow: input → analysis → filtering → recommendation → conditional execution, where AI functions as a decision-support node rather than a single source of truth.

I primarily build in low-resource environments (Android + Termux, minimal cloud instances, improvised servers), which has shaped a strong architectural mindset around modularity, fault tolerance, and system boundaries. I am comfortable deploying hybrid setups across cloud, local, and messaging interfaces, and I treat AI components as replaceable modules within a broader system.

While I do not specialize in training or fine-tuning large models, my strength lies in AI system architecture, automation integration, and rapid prototyping, especially in constrained contexts. My long-term goal is to develop AI systems that function as scalable infrastructure for real-world workflows, education, and small-scale economic ecosystems.
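The proto-agentic flow above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not a real deployment: `infer` is a stub standing in for a small LLM served over REST (e.g. a Flask endpoint), and all names (`Recommendation`, `handle`, the `ACTIONS` table) are assumptions made for the example.

```python
# Sketch of the proto-agentic flow:
# input -> analysis -> filtering -> recommendation -> conditional execution.
# `infer` is a stub standing in for a small LLM behind a REST service;
# every name here is illustrative, not part of any real system.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float
    rationale: str = ""

def infer(prompt: str) -> Recommendation:
    # Stub: a real system would POST `prompt` to a local inference endpoint
    # and parse the model's reply into a structured recommendation.
    if "disk" in prompt.lower():
        return Recommendation("clean_tmp", 0.9, "disk pressure detected")
    return Recommendation("noop", 0.3)

def analyze(event: str) -> str:
    # Analysis: normalize the raw input into a prompt for the model.
    return f"Event: {event.strip()}. Suggest an action."

def passes_filter(rec: Recommendation, threshold: float = 0.5) -> bool:
    # Filtering: the model is a decision-support node, not a source of
    # truth, so low-confidence recommendations are dropped before execution.
    return rec.confidence >= threshold

# Conditional execution: only whitelisted actions are ever run.
ACTIONS = {
    "clean_tmp": lambda: "cleaned /tmp",
    "noop": lambda: "no action",
}

def handle(event: str) -> str:
    rec = infer(analyze(event))          # analysis -> recommendation
    if not passes_filter(rec):           # filtering
        return "no action"
    return ACTIONS.get(rec.action, ACTIONS["noop"])()
```

The filter-then-whitelist shape is the point: the model proposes, but the surrounding system decides what actually runs, which keeps the AI component replaceable.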