
BrainSkribbl is an AI-powered platform for improving educational podcasts through brain-guided content analysis and refinement. The project combines large language models with neural engagement prediction to help creators produce explanations that are more engaging, memorable, and cognitively effective. Using TRIBE v2 brain encoding data, we fine-tuned a Qwen2.5 model to recognize patterns associated with audience attention and neural response. The result is a brain-guided LLM fine-tuning pipeline in which predicted brain activity serves as a feedback signal for generating and refining educational content.

GPU Infrastructure and Work:
• Optimized and modified the TRIBE v2 source code for AMD GPU acceleration and ROCm compatibility.
• Fine-tuned Qwen2.5 models on AMD GPU hardware using TRIBE v2 brain encoding data.
• Integrated Llama models into the TRIBE v2 inference pipeline.
• Developed and hosted a web application for podcast script and audio evaluation on Hugging Face, served through a Cloudflare tunnel and backed by AMD MI300X GPU inference.

The platform can evaluate podcast scripts and audio recordings, estimate neural engagement over time, and suggest improvements to pacing, clarity, structure, and wording using multimodal signals derived from TRIBE v2. Rather than relying solely on traditional readability metrics or subjective feedback, BrainSkribbl introduces biologically inspired optimization by modeling how listeners may respond cognitively to educational material. The system is designed for educators, researchers, podcast creators, and edtech developers who want to build learning experiences that are both informative and deeply engaging.
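The feedback loop described above can be sketched as follows. This is a minimal illustration only: `predict_engagement` stands in for the TRIBE-v2-derived engagement predictor and `generate_rewrites` for the fine-tuned Qwen2.5 model; both names, and the toy scoring heuristic, are assumptions for illustration, not the project's actual API.

```python
def predict_engagement(script: str) -> float:
    """Placeholder for the brain-encoding model: return a mean predicted
    engagement score in [0, 1]. Here a toy proxy (shorter sentences score
    higher) stands in for real predicted neural activity."""
    sentences = [s for s in script.split(".") if s.strip()]
    avg_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    return max(0.0, min(1.0, 1.0 - avg_len / 40.0))

def generate_rewrites(script: str) -> list[str]:
    """Placeholder for LLM-generated candidate rewrites; a real system would
    call the fine-tuned Qwen2.5 model. Includes the original so the loop
    never regresses."""
    return [script, script.replace(", which", ". This")]

def refine(script: str, rounds: int = 3) -> str:
    """Greedy refinement: each round, keep the candidate with the highest
    predicted engagement -- i.e., predicted brain response as the feedback
    signal steering generation."""
    best = script
    for _ in range(rounds):
        best = max(generate_rewrites(best), key=predict_engagement)
    return best
```

In the real pipeline the scoring would come from per-timepoint neural predictions rather than a scalar, which is what enables the engagement-over-time curves and localized pacing suggestions mentioned below.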
10 May 2026