12
5
United States
4 years of experience
Since the pandemic started, I've been building something most people wouldn't dream of: a digital Ontology of Upward Mobility, a defense mechanism built on data most people ignore. Sifting through court records and the skeletons buried in the education system, I pulled together a framework for survival and self-protection.

From this came a card game: a deck of questions, each one sharp enough to peel back the layers people usually hide. Some of these questions go straight for trauma, stigma, and the reasons people lie, especially in dating. It's a game designed to expose deception by cornering people into confronting what they'd rather keep hidden.

Then came the library: a digital arsenal of 2,600 books, each linked to the answers people give in the game, a self-expanding network of information. This isn't just a collection; it's a library tailored to its user, shaped by personal insights, each title pointing toward a deeper understanding of how to navigate, survive, and protect.

This concept made me a finalist at lablab.ai, but that's not the point. Now I'm showing others how to build their own defenses: teaching ontology and knowledge management, guiding people to construct their own personal AI libraries, frameworks that don't just inform but shield. My books walk them through it, helping them shape digital armor loaded with answers, ready for the complexities they'll face.

This is why I fit this role: I understand what it means to protect, to make technology a barrier against deception and hidden agendas. AI, as I've designed it, isn't just a tool; it's a weapon against the darkness people carry, a way to see, to safeguard, to stay ahead of what would otherwise consume them.
Our project was to run a low-cost series of simultaneous agents that interact with the same environmental conditions and collaborate on the same output documents. Our initial ambition was to run 10 upper-level suite agents (long-term themes and short-term goals, on a 3-year to 3-month horizon) and 30 supporting agents (2-week check-ins and daily repeat functions), but we were unable to gather enough domain knowledge sets for that scope.

So we ran with the domain knowledge we had and eventually settled on testing a "Human-Machine Teaming" model, designed to help humans trust the power of the technology without it seeming threatening: identifying the sources of our domain knowledge sets, setting the map of each agent's agenda, storing the agenda information in Pinecone, and synthesizing that with domain knowledge specific to the role. SuperAGI also has a tool that allows for document modification, which let the agents work on the same document from multiple perspectives.

The end result was actionable data with very few errors. The time spent setting up the agents and mapping the project and process goals for each one was nothing compared to the work we received back: roughly 40 hours of labor for 4 people, produced in a single run. You can see the outputs on GitHub. Anyways, thank you for hosting the space. Be well.
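The store-the-agenda, retrieve-by-role pattern can be sketched in a few lines. This is a minimal in-memory stand-in for the workflow described above, not the project's actual code: Pinecone is replaced by a plain dictionary, real embeddings by toy word-count vectors, and the `AgendaStore` class and its sample goals are illustrative assumptions.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding': a word-count vector standing in for a real model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class AgendaStore:
    """In-memory stand-in for a vector index such as Pinecone."""
    def __init__(self):
        self.items = {}  # id -> (vector, metadata)

    def upsert(self, item_id, text, metadata):
        self.items[item_id] = (embed(text), metadata)

    def query(self, text, top_k=3):
        qv = embed(text)
        scored = sorted(self.items.items(),
                        key=lambda kv: cosine(qv, kv[1][0]),
                        reverse=True)
        return [dict(id=i, score=round(cosine(qv, v), 3), **m)
                for i, (v, m) in scored[:top_k]]

# Each agent stores its agenda, then retrieves the items relevant to its role.
store = AgendaStore()
store.upsert("goal-3yr", "publish AI literacy curriculum", {"horizon": "3 years"})
store.upsert("goal-3mo", "draft curriculum outline and pilot lesson", {"horizon": "3 months"})
store.upsert("check-2wk", "review lesson drafts with team", {"horizon": "2 weeks"})

results = store.query("curriculum outline", top_k=2)
```

In the real workflow the upserts would go to a Pinecone index and the query vector would come from an embedding model; the retrieval shape stays the same.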
21 Aug 2023
We attempted to instill the deterministic, rule-based reasoning found in ELIZA into a more advanced, probabilistic model like an LLM. This serves a dual purpose: to introduce a controlled variable, in the form of ELIZA's deterministic logic, into the "fuzzier" neural network-based systems, and to create a synthetic dataset that can be used for various Natural Language Processing (NLP) tasks beyond fine-tuning the LLM.

[ https://huggingface.co/datasets/MIND-INTERFACES/ELIZA-EVOL-INSTRUCT ]
[ https://www.kaggle.com/code/wjburns/pippa-filter/ ]

ELIZA Implementation: We implemented the script meticulously, retaining its original transformational grammar and keyword-matching techniques.

Synthetic Data Generation: ELIZA then generated dialogues based on a seed dataset. These dialogues simulated both sides of a conversation and were structured to include the reasoning steps ELIZA took to arrive at its responses.

Fine-tuning: This synthetic dataset was then used to fine-tune the LLM, which learned not just the structure of human-like responses but also the deterministic logic that went into crafting them.

Validation: We subjected the fine-tuned LLM to a series of tests to ensure it had successfully integrated ELIZA's deterministic logic while retaining its ability to generate human-like text.

Challenges

Dataset Imbalance: Certain ELIZA responses occurred more frequently in the synthetic dataset, risking undue bias; we managed this through rigorous data preprocessing.

Complexity Management: Handling two very different types of language models, rule-based and neural network-based, posed its own unique set of challenges.

Significance

This project offers insight into how the strengths of classic models like ELIZA can be combined with modern neural network-based systems to produce a model that is both logically rigorous and contextually aware.
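The first two stages above can be condensed into a sketch. The rule set here is a tiny illustrative sample, not the full transformational grammar the project retained; the point is the shape of a synthetic record, which pairs each deterministic response with the reasoning step that produced it.

```python
import re

# A few ELIZA-style rules: (pattern, response template). The real implementation
# kept ELIZA's full keyword ranking and transformational grammar; this sample is
# deliberately minimal.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
]

# Pronoun reflection, as in the original ELIZA.
REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza_step(utterance):
    """Apply the first matching rule; return the response plus a reasoning trace."""
    for pattern, template in RULES:
        m = re.match(pattern, utterance.lower())
        if m:
            reply = template.format(*(reflect(g) for g in m.groups()))
            trace = f"matched keyword pattern '{pattern}', reflected captured fragment"
            return {"input": utterance, "response": reply, "reasoning": trace}
    return {"input": utterance, "response": "Please go on.",
            "reasoning": "no keyword matched; fallback rule"}

# One synthetic training record, reasoning trace included:
record = eliza_step("I am worried about my exams")
```

Records of this shape, accumulated over a seed dataset, are what the fine-tuning stage consumed.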
15 Sep 2023
In the neon-lit digital underworld, we wielded the code, Bash, and the terminal like a switchblade in a dark alley. With 2600 books jumbled in a main folder, chaos reigned. But then, we summoned OpenAI's API, a digital oracle, to decipher the cryptic hieroglyphs within those tomes. It read the tea leaves of text, determined the hidden truths, and neatly arranged them into categories, like cards in a deck. Each line of code a sharp stiletto, cutting through the chaos, and the terminal echoed with the hum of virtual triumph. In this digital noir, order emerged from chaos, and the API was our savior.
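Stripped of the noir, the loop was simple: read each book, ask the API for a category, file it away. This sketch is a reconstruction under stated assumptions, not the original script: the OpenAI request (which needs an API key) is abstracted behind a `classify` callable, and a stub stands in for it here.

```python
import shutil
import tempfile
from pathlib import Path

def sort_books(source_dir, classify):
    """Move each file in source_dir into a subfolder named by classify(filename).

    `classify` is any callable returning a category string; in the project it
    wrapped an OpenAI API call that read a sample of the book's text.
    """
    source = Path(source_dir)
    moved = {}
    for book in sorted(source.iterdir()):
        if not book.is_file():
            continue
        category = classify(book.name)
        dest = source / category
        dest.mkdir(exist_ok=True)
        shutil.move(str(book), dest / book.name)
        moved[book.name] = category
    return moved

# Stub classifier standing in for the API call:
def stub_classify(name):
    return "education" if "teach" in name.lower() else "fiction"

# Demo on a throwaway folder with two dummy "books":
tmp = tempfile.mkdtemp()
for n in ("teaching_methods.txt", "noir_novel.txt"):
    (Path(tmp) / n).write_text("...")
result = sort_books(tmp, stub_classify)
```

Swapping `stub_classify` for a function that sends the book's opening pages to the API reproduces the described workflow.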
14 Oct 2023
This was a collaboration between two finalists in the Open Interpreter Hackathon. Using mixtral-8x7b-24 as the large language model for open-interpreter now allows a user to access an LLM that beats ChatGPT on certain metrics. For our use case we used Hugging Face as a provider, meaning this workflow is free of charge. (The dataset, however, was vectorized using OpenAI due to time constraints.)

As with the open-interpreter toolkit, the user is able to have the agent use scripts as tools. The tool we made is query_documents. Now the user is not only able to use the agent to sort books, but can also query them, which allows for very interesting workflows. One of the future uses of this is to modify the outputs using agent protocols.

We continued the progress of a former hackathon on LabLab.AI: the world's first self-coded, self-categorized, and self-sorted library, found here: https://lablab.ai/event/open-interpreter-hackathon/2600-books-files-sorted/2600-books-sorted-for-multi-agent-creation. This time we did mass book summarization of the Education category, in order to prepare an educational-administrator agent that practices sales pitches for an AI literacy curriculum. Enjoy the video. Be well.

Here's the link to the leaders' project as well: https://lablab.ai/event/open-interpreter-hackathon/open-interpreter-toolkit/open-interpreter-tool-kit
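At its core, a query_documents tool is a relevance search over the vectorized summaries. A minimal sketch under stated assumptions: the project scored relevance with OpenAI embeddings, while plain word overlap stands in for that here, and the sample library and function body are illustrative rather than the actual tool script.

```python
def query_documents(query, documents, top_k=2):
    """Return the titles of the top_k summaries most relevant to the query.

    The project scored relevance against OpenAI-embedded summaries; simple
    word overlap stands in for that scoring here.
    """
    q = set(query.lower().split())
    def score(doc):
        return len(q & set(doc["summary"].lower().split()))
    ranked = sorted(documents, key=score, reverse=True)
    return [d["title"] for d in ranked[:top_k] if score(d) > 0]

# Illustrative mini-library of summarized books:
library = [
    {"title": "Pedagogy of Practice",
     "summary": "classroom teaching methods and curriculum design"},
    {"title": "Sales Playbook",
     "summary": "pitching and persuasion for sales teams"},
    {"title": "Night City",
     "summary": "a noir detective story"},
]
hits = query_documents("curriculum design for teaching", library)
```

Exposed to the agent as a script, a function like this lets a sorted library also be interrogated, which is the workflow the write-up describes.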
12 Jan 2024
Imagine a groundbreaking experimental setup where iconic Street Fighter characters are brought to life by VLM-controlled systems. In this cutting-edge environment, each player is represented by an advanced AI, seamlessly processing the game screen to make strategic decisions in real time. This innovative approach offers a deep dive into the VLMs' ability to navigate the fast-paced, high-pressure world of competitive fighting games, revealing their decision-making prowess and adaptability. As these AI-driven warriors clash in the digital arena, we uncover invaluable insights into their performance, pushing the boundaries of what's possible in AI and gaming. This experiment is not just a game-changer; it's a glimpse into the future of interactive entertainment. In an industry already shaken by massive disruptions, with multi-billion dollar releases canceled and studios facing unprecedented layoffs, this AI experiment stands as a beacon of innovation and hope, promising to revolutionize the way we understand and engage with video games. Get ready to witness the dawn of a new era in AI and gaming at the lablab hackathon, where VLMs take the stage and redefine the boundaries of interactive entertainment. This isn't just the future: it's happening now, and it's set to change the game forever!
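The control loop implied above (game screen in, button press out) can be sketched like this. The VLM call is stubbed, since the model, prompt, and action set are assumptions rather than the hackathon's actual code; the guard against malformed model output is the one design point worth noting.

```python
# Legal moves the policy may return (an assumed, simplified action set).
ACTIONS = {"advance", "retreat", "punch", "kick", "block"}

def game_loop(frames, vlm_decide):
    """Feed each captured frame to a VLM policy and collect the chosen actions.

    `vlm_decide` stands in for the real vision-language model call, which would
    receive the rendered game screen and return one legal action.
    """
    history = []
    for frame in frames:
        action = vlm_decide(frame, history)
        if action not in ACTIONS:  # guard against malformed model output
            action = "block"
        history.append(action)
    return history

# Stub policy standing in for the VLM: close the distance, then attack.
def stub_policy(frame, history):
    return "advance" if frame["distance"] > 2 else "punch"

frames = [{"distance": 5}, {"distance": 3}, {"distance": 1}]
actions = game_loop(frames, stub_policy)
```

Because VLM output is free text, clamping it to a legal action set is what keeps a real-time match from stalling on a bad generation.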
2 Jun 2024