
We built TurtleTalk around a lightweight, privacy-first AI architecture designed specifically for emotionally sensitive conversations with children. Instead of relying entirely on large cloud-hosted models, we fine-tuned Alibaba Cloud’s Qwen3-0.6B using 4-bit QLoRA, enabling efficient on-device inference while maintaining strong conversational quality. The training pipeline was implemented with the Unsloth framework inside a containerized environment deployed on AMD Cloud infrastructure for accelerated experimentation and reproducibility.

Our fine-tuning dataset and prompting strategy were built around established Social Emotional Learning (SEL) principles, focusing on emotional reflection, empathetic dialogue, self-awareness, emotional regulation, and curiosity-driven questioning for children. We optimized the model to support TurtleTalk’s three core interaction modes: Reflection Helper, Venting, and Big Questions. During training, the loss curve consistently decreased and later stabilized, indicating convergence toward reliable conversational behavior without severe overfitting. The resulting model checkpoint was published to the Hugging Face Hub for versioning, reproducibility, and future community collaboration.

A core technical decision behind TurtleTalk is local inference. By running the fine-tuned model directly on-device whenever possible, we significantly reduce response latency, creating a more natural conversational experience for children during emotionally sensitive moments. Local execution also minimizes the need to transmit personal conversations to external servers, providing stronger privacy guarantees and improving trust for both children and parents.

This architecture positions TurtleTalk as an emotionally aware AI companion that is not only lightweight and responsive, but also fundamentally designed with safety, privacy, and child-centered interaction in mind.
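One way the three interaction modes could be wired up before local inference is to map each mode to its own system prompt in the standard chat-messages format that chat-tuned models like Qwen3 expect. This is a minimal, hypothetical sketch: the mode names come from the post, but the prompt wording, function names, and structure are illustrative assumptions, not TurtleTalk's actual implementation.

```python
# Hypothetical routing of TurtleTalk's three modes to system prompts.
# Prompt text and helper names are illustrative assumptions.

SYSTEM_PROMPTS = {
    "reflection_helper": (
        "You are a gentle companion helping a child reflect on their day. "
        "Ask one open question at a time and mirror their feelings back."
    ),
    "venting": (
        "You are a patient listener. Validate the child's emotions, "
        "never judge, and avoid giving advice unless asked."
    ),
    "big_questions": (
        "You are a curious guide who explores a child's big questions "
        "with wonder, using simple, age-appropriate language."
    ),
}

def build_chat(mode: str, user_message: str) -> list[dict]:
    """Assemble a conversation in the messages format most
    chat-tuned models (including Qwen3) accept for inference."""
    if mode not in SYSTEM_PROMPTS:
        raise ValueError(f"unknown mode: {mode}")
    return [
        {"role": "system", "content": SYSTEM_PROMPTS[mode]},
        {"role": "user", "content": user_message},
    ]
```

The resulting message list can then be fed to whatever on-device runtime hosts the fine-tuned checkpoint, keeping the child's message local throughout.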
10 May 2026

ArcSplit is building ownership rails for community-made AI and creative workflows. Today, the best workflows are rarely made by one company alone. They are built from grassroots contributions: community-trained LoRAs, reusable prompts, automations, extensions, datasets, assets, and workflow templates. But while these building blocks create real value, the money usually flows to the biggest platforms instead of the people who made the pieces.

ArcSplit changes that. It turns every workflow into a payment graph, so when a workflow is used, the value can flow directly back to the contributors behind it. That means the author of a LoRA, the creator of a workflow, the builder of a tool, or the maker of an asset can all receive their share automatically, peer to peer, without relying on a centralized marketplace or third-party payout gate.

The goal is simple: reward the small creators who make the ecosystem better. When contributors are paid every time their work is used, they have a reason to keep improving it, maintaining it, and sharing it with the community. That creates better workflows, better tools, and a healthier open ecosystem built on direct support instead of centralized extraction.

ArcSplit is inspired by the creative ecosystems around extensions, plugins, and community assets, but reimagined for the age of AI agents and composable workflows. Community-built should also mean community-owned and community-paid.
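To make the payment-graph idea concrete, here is a minimal sketch of a pro-rata split: a workflow lists its contributors with relative weights, and each use of the workflow divides the payment among them. The contributor names, the weighting scheme, and the remainder rule are illustrative assumptions, not ArcSplit's actual protocol.

```python
# Hypothetical pro-rata payout for one node of a payment graph.
# Weights and names are illustrative, not ArcSplit's real scheme.

def split_payment(amount_cents: int, weights: dict[str, int]) -> dict[str, int]:
    """Split an integer amount (in cents) across contributors by weight.

    Each share is rounded down; any leftover cents go to the
    highest-weighted contributor so the shares always sum exactly
    to the original amount (no money created or destroyed).
    """
    total = sum(weights.values())
    shares = {who: amount_cents * w // total for who, w in weights.items()}
    remainder = amount_cents - sum(shares.values())
    if remainder:
        top = max(weights, key=weights.get)
        shares[top] += remainder
    return shares

# Example: a $10.00 use of a workflow built from three contributions.
payout = split_payment(
    1000,
    {"lora_author": 5, "workflow_creator": 3, "asset_maker": 2},
)
# → {"lora_author": 500, "workflow_creator": 300, "asset_maker": 200}
```

In a real system each contributor node could itself be a sub-graph (a workflow that embeds another workflow), with the same split applied recursively at each level.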
26 Apr 2026