In this project, we address the linguistic barriers faced by Yoruba speakers due to limited language resources. Image generation models perform best with English prompts, which poses a challenge for non-English speakers. To address this, we took a dual-track approach: data collection and model development. First, recognizing the scarcity of Yoruba datasets, particularly for image generation prompts, we curated our own dataset: English sentences were selected as image generation prompts and translated into Yoruba using a dictionary-based approach. Next, we trained a custom translator model that translates Yoruba into English. This intermediary step lets the system plug into existing image generation models, so the translated prompt can be passed to the SDXL API for image generation (a pipeline sketched below). The translator reached 85% accuracy on the test set, supporting the efficacy of our approach. The core strength of the project is that users can generate images in their native language without encountering a language barrier. By collecting our own data and training custom models, we circumvent the limitations imposed by the scarcity of Yoruba resources, and leveraging the SDXL API for image generation ensures high-quality outputs. Looking ahead, we plan to extend this work to additional languages such as Fon and Dendi, expanding the dataset and reaching a broader audience. Our ultimate goal is a model that generates images directly from Yoruba, Fon, and Dendi prompts without translating into English. In summary, the project addresses a pressing need within the Yoruba-speaking community and lays the groundwork for future work on multilingual image generation, enabling inclusive, barrier-free communication and creative expression.
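A minimal sketch of the translate-then-generate pipeline described above, under stated assumptions: the custom Yoruba-to-English translator is loaded as a Hugging Face seq2seq checkpoint (the checkpoint name "our-org/yoruba-english-translator" is a placeholder, not the project's actual model), and image generation calls a Stability-style SDXL text-to-image REST endpoint with an API key from the environment. The project's real code and endpoint may differ.

```python
import os
import base64
import requests
from transformers import pipeline

# Placeholder checkpoint name for the custom Yoruba -> English translator.
translator = pipeline("translation", model="our-org/yoruba-english-translator")

def generate_from_yoruba(yoruba_prompt: str, out_path: str = "image.png") -> str:
    # Step 1: translate the Yoruba prompt into English.
    english_prompt = translator(yoruba_prompt)[0]["translation_text"]

    # Step 2: send the English prompt to an SDXL text-to-image endpoint.
    response = requests.post(
        "https://api.stability.ai/v1/generation/stable-diffusion-xl-1024-v1-0/text-to-image",
        headers={
            "Authorization": f"Bearer {os.environ['STABILITY_API_KEY']}",
            "Accept": "application/json",
        },
        json={"text_prompts": [{"text": english_prompt}], "samples": 1},
        timeout=120,
    )
    response.raise_for_status()

    # Step 3: decode and save the first returned image.
    image_b64 = response.json()["artifacts"][0]["base64"]
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(image_b64))
    return out_path

# Example usage (pass any Yoruba prompt string):
# generate_from_yoruba("<Yoruba prompt here>")
```

The key design point is that the image model itself is untouched: only the prompt is converted, so any English-prompt generator can sit behind the translator.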
ezAGI {easy Augmented Generative Intelligence} provides a comprehensive framework for developing modular, scalable, and efficient AGI. Integrating multiple AI models, ezAGI handles API management efficiently while managing memory for continuous reasoning and interaction without user intervention. Components of ezAGI include SocraticReasoning, AGI, FundamentalAGI, LogicTables, OpenMind, memory management, and API key management with multi-model support. ezAGI seamlessly integrates models from Together, Groq, and OpenAI to enhance any LLM with Continuous Autonomous Reasoning, and it builds short-term memory from each input/response exchange. Leveraging internal reasoning and logic, ezAGI autonomously makes decisions based on data inputs and predefined rules. The modules are organized as follows (see the sketches after this paragraph). SocraticReasoning.py implements Socratic reasoning by adding premises, challenging them, and calling draw_conclusion. agi.py handles learning from data and make_decision by initializing AGI as a chatter instance. memory provides the ability to learn from environmental data and store dialogue as history. automind.py manages environment interaction and response generation. logic.py handles logical variables and expressions, generates truth tables, and validates truths, supporting ezAGI's reasoning. openmind.py provides an internal reasoning loop for continuous AGI operation, feeding prompts reasoned from premises into processed conclusions while autonomously saving internal reasoning. memory.py manages memory storage, ensuring organized and persistent storage of short-term, long-term, and episodic memories. api.py uses the dotenv library for secure API key management, allowing dynamic integration with AI services. chatter.py provides input-response mechanisms for a multi-model environment including Together, Groq, and OpenAI, ensuring robust and logical responses. Together, these modules let ezAGI augment the intelligence of large language models.
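A minimal sketch of the SocraticReasoning.py interface described above, not the actual ezAGI implementation: only draw_conclusion and the premise list come from the description, while add_premise, challenge_premise, and the injected chatter callable are illustrative assumptions standing in for chatter.py's model-backed responses.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SocraticReasoning:
    chatter: Callable[[str], str]           # LLM-backed input/response function
    premises: List[str] = field(default_factory=list)

    def add_premise(self, premise: str) -> None:
        # Store a new premise for later reasoning.
        self.premises.append(premise)

    def challenge_premise(self, premise: str) -> str:
        # Ask the underlying model to argue against the premise.
        return self.chatter(f"Challenge this premise and note any flaws: {premise}")

    def draw_conclusion(self) -> str:
        # Combine the current premises into a single conclusion via the model.
        joined = "\n".join(f"- {p}" for p in self.premises)
        return self.chatter(
            f"Given these premises:\n{joined}\nState the conclusion that follows."
        )

# Example usage with a stub chatter; a real deployment would route this call
# through Together, Groq, or OpenAI via chatter.py.
if __name__ == "__main__":
    reasoner = SocraticReasoning(chatter=lambda prompt: f"[model response to: {prompt[:40]}...]")
    reasoner.add_premise("All Socratic agents question their premises.")
    reasoner.add_premise("ezAGI is a Socratic agent.")
    print(reasoner.draw_conclusion())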
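And a minimal sketch of dotenv-based key management in the spirit of api.py; the environment variable names and the provider set are assumptions rather than the actual ezAGI configuration.

```python
import os
from dotenv import load_dotenv

def load_api_keys() -> dict:
    # Read keys from a local .env file into the process environment.
    load_dotenv()
    keys = {
        "together": os.getenv("TOGETHER_API_KEY"),
        "groq": os.getenv("GROQ_API_KEY"),
        "openai": os.getenv("OPENAI_API_KEY"),
    }
    # Keep only the providers that are actually configured.
    return {name: key for name, key in keys.items() if key}

# Example: report which providers have keys available for multi-model support.
if __name__ == "__main__":
    available = load_api_keys()
    print("Configured providers:", ", ".join(available) or "none")
```

Keeping keys in a .env file loaded at startup means provider backends can be added or removed without touching the reasoning modules.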