A chatbot that helps people rapidly create Wikipedia articles, powered by Cohere's large language model and retriever. The chatbot condenses information into Wikipedia-style articles that can be used by humans or AI. With it, you can get up-to-date, highly verifiable information on topics and people without the human labor of maintaining pages. However, this is not meant to remove the human: because it is a chatbot, a person creating articles can pick apart the results and ask it to verify them. It also addresses the problem of dead (404) reference URLs.
18 Nov 2023
One of the most groundbreaking features of large language models is their ability to use code to accomplish tasks. This is a library that leverages Open Interpreter to create and use scripts (tools) for solving problems. The generated scripts are designed to be reused and expanded upon. In addition, each script is well documented so that agents can determine whether it can provide the user with the desired answer or accomplish the desired goal. Saving code for future use helps minimize the cost of regenerating code for common tasks, and it allows scripts to grow more robust and capable over time. Furthermore, the human, if so desired, can work with the agent to improve or build upon existing tools to account for the agent's shortcomings. Now any user can have an AI-generated code base with code that works on their personal machine.
14 Oct 2023
One of the difficulties of bringing RAG to a mass audience is a lack of understanding of the underlying NLP techniques required to produce good queries. With this tool, an AI agent looks at the query and the results to help the user write better queries in the future. For example, if the user has never used RAG before, they may ask a vague question. The agent will pick up on this, inform the user, and suggest how to query for better results. The tool is general enough to adapt easily to established RAG pipelines, and it is data-agnostic, so it could be adopted in many fields.
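A toy sketch of the feedback loop described above. The actual tool delegates the critique to an AI agent; here a few hand-written heuristics (all of them my assumptions, including the thresholds) stand in for it, just to show the shape of query-plus-results going in and suggestions coming out.

```python
# Heuristic stand-in for the query-feedback agent: look at the query and
# the retrieval scores, and return suggestions for writing a better query.
# The real tool would have an LLM produce this critique.

def critique_query(query: str, top_scores: list[float]) -> list[str]:
    """Return human-readable suggestions for improving a RAG query."""
    suggestions = []
    if len(query.split()) < 4:
        suggestions.append(
            "Your query is very short; add specific terms, names, or dates."
        )
    if top_scores and max(top_scores) < 0.5:
        suggestions.append(
            "Retrieved passages matched weakly; rephrase using vocabulary "
            "that likely appears in the source documents."
        )
    if "?" not in query and len(query.split()) < 8:
        suggestions.append(
            "Consider phrasing the request as a full question to give the "
            "retriever more context."
        )
    return suggestions
```

A vague one-word query with weak retrieval scores trips every rule, while a specific, well-formed question with strong matches passes through with no suggestions.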
9 Nov 2023
This application addresses the lack of specific, searchable metadata in visual data. Using Google's Gemini model, we map tags to images. This generated data can then be vectorized and searched, meaning the most computationally expensive operation is done only once per image, and tags can be found using semantic search rather than exact tag matching.
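The pipeline above can be sketched roughly as follows. A real deployment would use Gemini for tagging and an embedding model for vectorization; here the tags are hard-coded sample data and a bag-of-words cosine similarity stands in for the embeddings, so the example is self-contained.

```python
# Toy sketch of the tag-search pipeline: expensive per-image tagging runs
# once, the tags are "embedded", and queries match tags semantically.
# Bag-of-words vectors substitute for a real embedding model here.
from collections import Counter
import math

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Tags generated once per image (the expensive step, done by the vision model).
image_tags = {
    "img_001.jpg": "dog beach sunset golden retriever",
    "img_002.jpg": "city skyline night skyscraper lights",
}
index = {name: embed(tags) for name, tags in image_tags.items()}

def search(query: str) -> str:
    """Return the image whose tags best match the query."""
    q = embed(query)
    return max(index, key=lambda name: cosine(q, index[name]))
```

Because the tags live in the index, a query never needs to re-run the vision model; matching happens entirely in the vector space.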
22 Jan 2024
This was a collaboration between two finalists in the Open Interpreter Hackathon. Using mixtral-8x7b-24 as the large language model for Open Interpreter lets a user access an LLM that beats ChatGPT on certain metrics. For our use case we used Hugging Face as the provider, meaning this workflow is free of charge. (The dataset, however, was vectorized using OpenAI due to time constraints.) As in the Open Interpreter toolkit, the user can have the agent use scripts as tools. The tool we made is query_documents: now the user can not only have the agent sort books but also query them, which allows for very interesting workflows. One future use of this is to modify the outputs using agent protocols. We continued the progress of a former hackathon on LabLab.AI: the world's first self-coded, self-categorized, and self-sorted library, found here: https://lablab.ai/event/open-interpreter-hackathon/2600-books-files-sorted/2600-books-sorted-for-multi-agent-creation. This time we did mass book summarization of the Education category to prepare for creating an educational-administrator agent that practices sales pitches for an AI literacy curriculum. Enjoy the video. Be well. Here's the link to the leaders' project as well: https://lablab.ai/event/open-interpreter-hackathon/open-interpreter-toolkit/open-interpreter-tool-kit
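An illustrative sketch of what a query_documents-style tool might look like: given a question, rank stored book summaries by rough textual similarity and return the best match. The hackathon version searched vectorized documents; `difflib` and the sample summaries here are stand-in assumptions so the sketch runs anywhere.

```python
# Illustrative query_documents sketch: rank stored book summaries against
# a question using difflib similarity. The real tool used vector search
# over embeddings; the data and matching method here are placeholders.
import difflib

summaries = {
    "Intro to Pedagogy": "methods for teaching and classroom instruction",
    "Statistics Primer": "probability distributions and hypothesis testing",
}

def query_documents(question: str, k: int = 1) -> list[str]:
    """Return the titles of the k summaries most similar to the question."""
    ranked = sorted(
        summaries,
        key=lambda t: difflib.SequenceMatcher(
            None, question.lower(), summaries[t]
        ).ratio(),
        reverse=True,
    )
    return ranked[:k]
```

Exposed as a tool, a function like this lets the agent answer questions about the sorted library instead of only organizing it.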
12 Jan 2024
After discovering MAS.863/4.140/6.9020 How To Make (almost) Anything, I found this course to be the fundamentals for anyone interested in learning how to make things. In this course, the instructors cover everything from 3D modeling to electronics to material science and biology. As a person interested in science, I have always wanted to build a resource that compounds information to generate insights, and I believe this is a proof of concept of that. The more people query the system, the more information it will have; it will thrive on the curiosity of makers. I see this system potentially being used with cloud laboratories and 3D printer farms, ideally using the information it gains to improve the pipeline, such as the quality of the text-to-3D models and generated experiments.
7 Mar 2024