
Imagine an experiment in which iconic Street Fighter characters are brought to life by VLM-controlled systems. Each player is driven by a vision-language model that reads the game screen and makes strategic decisions in real time. The setup offers a close look at how VLMs handle the fast-paced, high-pressure world of competitive fighting games, revealing their decision-making and adaptability. As these AI-driven fighters clash in the digital arena, we gather insights into their performance and push the boundaries of what is possible at the intersection of AI and gaming. In an industry already shaken by canceled multi-billion-dollar releases and unprecedented layoffs, this experiment points toward a new way of understanding and engaging with video games. Built for the lablab hackathon, it puts VLMs on stage and redefines the boundaries of interactive entertainment. This isn't just the future: it's happening now.
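The post does not describe the control loop itself, but a setup like this usually works as a capture-prompt-act cycle. The sketch below is a minimal, hypothetical illustration of that loop; the callables (`query_vlm`, `capture_frame`, `send_input`) and the move list are assumptions for illustration, not the project's actual code.

```python
import base64
import time

# Hypothetical action space; the real project may expose different inputs.
VALID_MOVES = ["left", "right", "jump", "crouch", "punch", "kick", "hadouken"]

def agent_loop(query_vlm, capture_frame, send_input, player="P1"):
    """Hypothetical control loop: screenshot -> VLM decision -> game input."""
    while True:
        frame_png = capture_frame()                       # raw PNG bytes of the game screen
        image_b64 = base64.b64encode(frame_png).decode()  # most VLM APIs accept base64 images
        prompt = (
            f"You are controlling {player} in Street Fighter. "
            f"Given the current screen, reply with exactly one move from: {VALID_MOVES}."
        )
        move = query_vlm(prompt, image_b64).strip().lower()
        if move in VALID_MOVES:                           # guard against free-form VLM output
            send_input(player, move)
        time.sleep(0.1)                                   # crude pacing for real-time play
```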
2 Jun 2024

Zhang Beto: Revolutionizing Communication and Tourism in Benin. Zhang Beto is the first native application supporting speech-to-text and text-to-speech exclusively in Fan, and it is set to transform how locals and tourists interact and explore the rich cultural heritage of Benin.

- Fan Language Speech-to-Text: Effortlessly convert spoken Fan into written text. Whether you're a local needing to transcribe conversations or a tourist trying to jot down phrases, Zhang Beto ensures accuracy and ease of use.
- Fan Language Text-to-Speech: Type in Fan and let the app speak for you. Perfect for learning pronunciation, practicing conversations, or communicating with locals when you don't feel confident speaking the language yourself.
- Tourism Enhancement: Discover Benin's hidden gems with AI-driven recommendations. Zhang Beto uses advanced algorithms to suggest tourist spots, cultural landmarks, and activities tailored to your interests, making your visit to Benin memorable and enriching.
- Cultural Insights: Learn about Benin's traditions, history, and customs through the app's curated content. Zhang Beto is not just a communication tool but also an educational resource, helping users gain a deeper understanding of the Fan-speaking community.
- User-Friendly Interface: Designed with simplicity and functionality in mind, Zhang Beto offers an intuitive experience for both tech-savvy users and those less familiar with digital applications.
- Boosting Tourism: With tourism contributing only 0.7% to Benin's economy, Zhang Beto aims to invigorate the sector by making it easier for tourists to navigate and appreciate the country's attractions through the Fan language.
- Cultural Preservation: Zhang Beto plays a crucial role in documenting and preserving the Fan language, ensuring that it remains a vibrant part of Benin's cultural landscape for future generations.

Zhang Beto - Connecting People, Discovering Benin.
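The post describes the app's features rather than its internals. Purely as an illustration, the sketch below shows the kind of interface those features imply; the class and backend methods are hypothetical, and no specific Fan speech model is assumed.

```python
# Hypothetical interface sketch -- no specific Fan speech models are assumed here;
# the backends just need to expose transcribe() and synthesize() methods.
class FanSpeechService:
    def __init__(self, stt_backend, tts_backend):
        self.stt = stt_backend   # assumed: .transcribe(audio_bytes) -> str
        self.tts = tts_backend   # assumed: .synthesize(text) -> audio bytes

    def speech_to_text(self, audio_bytes: bytes) -> str:
        """Convert spoken Fan into written text."""
        return self.stt.transcribe(audio_bytes)

    def text_to_speech(self, text: str) -> bytes:
        """Turn typed Fan into audio, e.g. to practice pronunciation."""
        return self.tts.synthesize(text)
```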
16 May 2024

Introducing **Vectonic** - the game-changer in business information retrieval! Tired of sifting through endless documents and notes to find crucial information? Vectonic is here to change the way professionals handle data overload, saving both time and money. With its AI-powered search engine, Vectonic brings precision and efficiency to a new level: no more mismatched search results or hours wasted trying to make sense of scattered data. Imagine accessing comprehensive insights and valuable data with a simple query. Whether the source is a formal report or a casual note, Vectonic's technology delivers accuracy and relevance, making it a useful tool for junior executives and business professionals alike. With Vectonic's app creation and publication features, junior executives can develop high-performance knowledge retrieval applications for their enterprises, streamlining operations and boosting productivity. Join us in transforming the way businesses organize and access information.
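Vectonic's internals aren't described in the post, but the core idea behind this kind of tool, matching a natural-language query against document embeddings instead of keywords, can be sketched briefly. The use of sentence-transformers below is an assumption for illustration, not necessarily Vectonic's actual stack.

```python
# A minimal semantic-search sketch: embed documents once, then rank by cosine similarity.
# Assumes the sentence-transformers package; Vectonic's real implementation is not documented here.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Q3 revenue grew 12% driven by the enterprise segment.",
    "Meeting note: onboarding for the new CRM starts Monday.",
    "Formal report: supply-chain risks for the APAC region.",
]
doc_vecs = model.encode(documents, normalize_embeddings=True)

def search(query: str, top_k: int = 2):
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q                        # cosine similarity (vectors are normalized)
    best = np.argsort(-scores)[:top_k]
    return [(float(scores[i]), documents[i]) for i in best]

print(search("How did sales perform last quarter?"))
```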
19 Apr 2024

**Introduction**

Adapt-a-RAG is an innovative application that leverages the power of retrieval augmented generation to provide accurate and relevant answers to user queries. By adapting itself to each query, Adapt-a-RAG ensures that the generated responses are tailored to the specific needs of the user. The application utilizes various data sources, including documents, GitHub repositories, and websites, to gather information and generate synthetic data. This synthetic data is then used to optimize the prompts of the Adapt-a-RAG application, enabling it to provide more accurate and contextually relevant answers.

**How It Works**

Adapt-a-RAG works by following these key steps (a minimal sketch of the flow follows the list):

1. Data Collection: The application collects data from various sources, including documents, GitHub repositories, and websites. It utilizes different reader classes such as CSVReader, DocxReader, PDFReader, ChromaReader, and SimpleWebPageReader to extract information from these sources.
2. Synthetic Data Generation: Adapt-a-RAG generates synthetic data using the collected data. It employs techniques such as data augmentation and synthesis to create additional training examples that can help improve the performance of the application.
3. Prompt Optimization: The synthetic data is used to optimize the prompts of the Adapt-a-RAG application. By fine-tuning the prompts based on the generated data, the application can generate more accurate and relevant responses to user queries.
4. Recompilation: Adapt-a-RAG recompiles itself on every run based on the optimized prompts and the specific user query. This dynamic recompilation allows the application to adapt and provide tailored responses to each query.
5. Question Answering: Once recompiled, Adapt-a-RAG takes the user query and retrieves relevant information from the collected data sources. It then generates a response using the optimized prompts and the retrieved information, providing accurate and contextually relevant answers to the user.
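The skeleton below wires the five steps together to make the per-query recompilation loop concrete. All helpers (`collect_documents`, `generate_synthetic_examples`, `optimize_prompts`, `compile_pipeline`) are hypothetical stand-ins injected by the caller, not Adapt-a-RAG's actual API.

```python
# Hypothetical orchestration of the five steps described above; the injected helpers
# are placeholders that illustrate the control flow only.

def answer_query(query: str, sources: list[str], *,
                 collect_documents, generate_synthetic_examples,
                 optimize_prompts, compile_pipeline) -> str:
    # 1. Data Collection: read documents, repos, and web pages via reader classes.
    documents = collect_documents(sources)

    # 2. Synthetic Data Generation: build extra question/answer examples from the corpus.
    synthetic_examples = generate_synthetic_examples(documents)

    # 3. Prompt Optimization: tune prompts against the synthetic examples.
    optimized_prompts = optimize_prompts(synthetic_examples)

    # 4. Recompilation: rebuild the pipeline for this specific query with the tuned prompts.
    pipeline = compile_pipeline(optimized_prompts, query)

    # 5. Question Answering: retrieve relevant chunks and generate the final answer.
    context = pipeline.retrieve(query, documents)
    return pipeline.generate(query, context)
```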
16 Mar 2024
After discovering MAS.863/4.140/6.9020 How To Make (almost) Anything, I found this course to cover the fundamentals for anyone interested in learning how to make things. The instructors talk about everything from 3D modeling to electronics to material science and biology. As a person interested in science, I always wanted to build a resource that compounds information so it can be used to generate insights, and I believe this is a proof of concept of that idea. The more people query the system, the more information the system will have; it thrives on the curiosity of makers. I see this system potentially being used with cloud laboratories and 3D printer farms, ideally using the information it gains to improve the pipeline, such as the quality of the text-to-3D models and generated experiments.
7 Mar 2024

This application addresses the lack of specific data collected from visual sources. Using Google Gemini's model, we map tags to images. This generated data can then be vectorized and searched, meaning the most computationally expensive operation is done only once per image, and the tags can be found using semantic search rather than by matching tags exactly.
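A minimal sketch of the two stages described above: generating tags once per image with Gemini, then searching the tag text semantically. It assumes the google-generativeai SDK and sentence-transformers for embeddings; the model names and prompt are illustrative, not the project's exact code.

```python
# Stage 1 (expensive, once per image): ask Gemini for descriptive tags.
# Stage 2 (cheap, per query): embed the tag strings and search them semantically.
# Assumes the google-generativeai, sentence-transformers, and Pillow packages.
import google.generativeai as genai
from sentence_transformers import SentenceTransformer
import numpy as np
from PIL import Image

genai.configure(api_key="YOUR_GEMINI_API_KEY")            # placeholder key
vision_model = genai.GenerativeModel("gemini-pro-vision")
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def tag_image(path: str) -> str:
    image = Image.open(path)
    response = vision_model.generate_content(
        ["List short descriptive tags for this image, comma separated.", image]
    )
    return response.text

def build_index(paths: list[str]):
    tags = [tag_image(p) for p in paths]                   # the costly step, once per image
    vecs = embedder.encode(tags, normalize_embeddings=True)
    return paths, tags, vecs

def search(query: str, index, top_k: int = 3):
    paths, tags, vecs = index
    q = embedder.encode([query], normalize_embeddings=True)[0]
    order = np.argsort(-(vecs @ q))[:top_k]
    return [(paths[i], tags[i]) for i in order]
```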
22 Jan 2024

This was a collaboration between two finalists in the Open Interpreter Hackathon. Using mixtral-8x7b-24 as the large language model for Open Interpreter lets a user access an LLM that beats ChatGPT on certain metrics. For our use case we use Hugging Face as the provider, meaning this workflow is free of charge; the dataset, however, was vectorized using OpenAI due to time constraints. As with the Open Interpreter toolkit, the user can have the agent use scripts as tools. The tool we made is query_documents. Now the user is not only able to use the agent to sort books, the books can also be queried, which allows for very interesting workflows. One of the future uses of this is to modify the outputs using agent protocols. We continued the progress of a former hackathon on LabLab.AI: the world's first self-coded, self-categorized, and self-sorted library, found here: https://lablab.ai/event/open-interpreter-hackathon/2600-books-files-sorted/2600-books-sorted-for-multi-agent-creation. This time we did mass book summarization of the Education category to prepare an educational administrator agent that practices sales pitches for an AI literacy curriculum. Enjoy the video. Be well. Here's the link to the leaders' project as well: https://lablab.ai/event/open-interpreter-hackathon/open-interpreter-toolkit/open-interpreter-tool-kit
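The post names the tool, query_documents, but not its implementation. The sketch below shows one plausible shape for such a script, assuming a pre-built vector index of book summaries; the index object and its similarity_search API are hypothetical stand-ins.

```python
# query_documents.py -- hypothetical sketch of the tool script the agent can call.
# The vector index of book summaries is assumed to already exist (the post notes it was
# built with OpenAI embeddings); `index` and its API here are illustrative stand-ins.

def query_documents(question: str, index, top_k: int = 5) -> str:
    """Return the passages from the sorted book library most relevant to `question`."""
    hits = index.similarity_search(question, k=top_k)   # assumed vector-store interface
    return "\n\n".join(f"[{hit.source}] {hit.text}" for hit in hits)

# Open Interpreter can then be pointed at this script so the agent invokes
# query_documents(...) as a tool whenever the user asks about the library.
```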
12 Jan 2024
A chatbot that helps people rapidly create Wikipedia articles, powered by Cohere's large language model and retriever. It condenses information into Wikipedia articles that can be used by humans or AI. With this chatbot, you can get up-to-date, highly verifiable information on topics and people without the human labor of maintaining pages. However, this is not meant to remove the human: because it is a chatbot, a person creating articles can pick apart the results and ask the chatbot to verify them. It also solves the problem of dealing with 404 URLs in references.
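The post doesn't include code, but Cohere's chat endpoint accepts retrieved documents and returns citation-annotated text, which matches the workflow described. The sketch below is a minimal example under that assumption; the retrieval step (`retrieve_sources`) is a hypothetical placeholder.

```python
# Minimal sketch: ground a draft Wikipedia-style section in retrieved sources via Cohere chat.
# Assumes the cohere Python SDK; retrieve_sources is a hypothetical retrieval step.
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")  # placeholder key

def draft_section(topic: str, retrieve_sources) -> str:
    documents = [
        {"title": src["title"], "snippet": src["text"]}   # Cohere chat accepts document dicts
        for src in retrieve_sources(topic)
    ]
    response = co.chat(
        message=f"Write a neutral, well-sourced Wikipedia-style section about {topic}.",
        documents=documents,
    )
    # response.citations maps spans of the answer back to the supplied documents,
    # which is what lets an editor pick apart and verify the draft.
    return response.text
```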
18 Nov 2023
One of the difficulties of bringing RAG to a mass audience is a lack of understanding of the underlying NLP techniques required to produce good queries. With this tool, an AI agent looks at the query and the results to help the user write better queries in the future. For example, a user who has never used RAG before may ask a vague question; the agent will pick up on this, inform the user, and suggest how to query for better results. The tool is general enough to integrate easily with established RAG pipelines, and it is data-agnostic, meaning it could be adopted in many fields.
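One way to make the idea concrete: after any existing RAG pipeline returns its results, pass the original query and the retrieved snippets to an LLM that critiques the query and suggests improvements. The sketch below is pipeline-agnostic; the `rag_pipeline` and `llm` callables are placeholders for whatever components the tool actually wraps.

```python
# Pipeline-agnostic query coach: wraps an existing RAG call and adds feedback.
# `rag_pipeline` and `llm` are placeholders for the user's own components.

CRITIQUE_PROMPT = """You are a retrieval coach. Given a user's query and the snippets a
RAG system retrieved for it, say whether the query was specific enough and suggest
two or three rewritten queries that would likely retrieve better results."""

def answer_with_coaching(query: str, rag_pipeline, llm):
    results = rag_pipeline(query)                        # the user's existing RAG call
    snippets = "\n".join(r["text"] for r in results)
    feedback = llm(f"{CRITIQUE_PROMPT}\n\nQuery: {query}\n\nRetrieved snippets:\n{snippets}")
    return {"results": results, "query_feedback": feedback}
```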
9 Nov 2023