EduLlama tackles the challenge of solving complex, JEE-level math problems with AI. Large Language Models (LLMs), while excellent at text generation, often struggle with intricate mathematical reasoning and calculation. This project addresses that limitation with Meta's open-source Llama 3.1 and 3.2 models (including the vision-instruct variants), accessed through Together AI's high-performance inference services.

The core innovation is Together AI's Mixture of Agents (MoA) architecture. By combining the strengths of multiple Llama models, MoA compensates for each model's individual weaknesses and produces markedly more accurate solutions, rivaling leading closed-source models such as OpenAI's o1-preview and Anthropic's Claude 3.5 Sonnet on complex mathematical tasks. To further improve accuracy, the project integrates Open Interpreter, letting the LLMs execute code locally for precise calculations, which minimizes hallucinations and yields reliable results. Complex problems are decomposed into sub-problems, producing a step-by-step solution.

The project also includes an interactive voice assistant, powered by Groq and ElevenLabs TTS, so users can ask follow-up questions about a solution in natural language and receive audio explanations, simulating a one-on-one session with a math tutor. Whisper provides accurate speech-to-text transcription for a seamless, intuitive experience.

Together, these pieces demonstrate the potential of Meta's open-source LLMs combined with Together AI's MoA and fast inference for solving challenging math problems, paving the way toward accessible, interactive AI-powered education for students built on open-source models.
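The MoA flow described above can be sketched as follows: several "proposer" Llama models each draft a solution, and an "aggregator" model is then prompted to critique and synthesize them into one answer. This is only a minimal sketch of the prompt-assembly step; the aggregator prompt wording is illustrative and the actual Together AI network calls are omitted.

```python
# Sketch of the Mixture of Agents (MoA) aggregation step: proposer
# drafts are folded into a single prompt for an aggregator model.
# The system-prompt wording here is an assumption, not the project's
# exact configuration.

AGGREGATOR_SYSTEM_PROMPT = (
    "You have been provided with candidate solutions from several models. "
    "Critically evaluate them, correct any errors, and synthesize a single "
    "accurate, step-by-step solution."
)

def build_aggregator_messages(problem: str, proposer_answers: list[str]) -> list[dict]:
    """Combine the proposers' drafts into one chat-style aggregation prompt."""
    references = "\n\n".join(
        f"[Solution {i + 1}]\n{answer}"
        for i, answer in enumerate(proposer_answers)
    )
    return [
        {"role": "system", "content": AGGREGATOR_SYSTEM_PROMPT},
        {"role": "user", "content": f"Problem:\n{problem}\n\n{references}"},
    ]
```

In the real pipeline, the returned message list would be sent to an aggregator model (e.g. a Llama 3.1 instruct variant) via Together AI's chat-completions API.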
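The value of local code execution for precise calculation can be illustrated with a toy harness: instead of trusting the model's mental arithmetic, the model emits a snippet whose executed output is fed back into the answer. This is a deliberately minimal, unsandboxed demo of the idea; the project itself delegates real execution to Open Interpreter.

```python
# Toy illustration of executing model-generated code and capturing its
# printed result, so exact values replace hallucinated arithmetic.
# Note: exec() with no isolation is for demonstration only.
import io
import contextlib

def run_generated_code(code: str) -> str:
    """Execute a Python snippet and return whatever it prints."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})  # demo only: no sandboxing or resource limits
    return buffer.getvalue().strip()

# Example: an LLM asked for C(49, 6) might misstate the value, but a
# generated snippet computes it exactly.
snippet = "import math\nprint(math.comb(49, 6))"
```

Feeding `snippet` through `run_generated_code` returns the exact binomial coefficient as a string, which the solver can splice into its step-by-step explanation.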
Category tags: