United States
4 years of experience
Resilient Innovator in Semantic Web, AI, and Data Conversion | Don Duval: Advancing Transformative Projects Amidst Challenges and Racial Adversity

Don Duval is a tenacious force in the fields of semantic web, AI, and data conversion, demonstrating adaptability and determination in the face of adversity. Duval began his career in curriculum development and grant reviewing; his path took an unexpected turn when he discovered a rich collection of historical data detailing his family's experiences as Black Americans. Confronted by racial prejudice within the AI community, Duval drew on his expertise in OSINT and OPSEC, coupled with a robust background in graffiti art and involvement in a globally recognized graffiti crew, to establish three groundbreaking projects: TAIRL, MM, and DS. Despite persistent challenges, these initiatives strive to promote inclusivity and empowerment on a national level.

Relentlessly targeted by a hostile environment, Duval remains steadfast in his mission, drawing upon his knowledge and skills to uplift individuals from disadvantaged backgrounds and provide them with invaluable advantages in an increasingly competitive world. His latest endeavor is the creation of an extensive knowledge share, aiming to empower those with limited resources, surpass the efforts of those who misuse AI, and contribute to a more equitable future.

Duval's journey serves as a powerful testament to the indomitable spirit of innovation and the ability to forge a path forward in the face of unrelenting adversity. His tireless contributions continue to redefine the boundaries of his fields, challenging systemic racism and setting a precedent for inclusivity and excellence. Come help us innovate by contacting Don Duval personally at [email protected].
Our project was to run a low-cost set of simultaneous agents that interact with the same environmental conditions and collaborate on the same output documents. We initially had the ambition to run 10 Upper Level Suite agents (long-term themes and short-term goals, on a 3-year to 3-month horizon) and 30 supporting agents (2-week check-ups and daily repeat functions), but we were unable to gather enough domain knowledge sets for this particular project. So we ran with the domain knowledge we had and eventually decided to test a "Human-Machine Teaming" model, designed to help humans trust the power of the technology without it seeming threatening: we identified the sources of our domain knowledge sets, mapped out each agent's agenda, stored that agenda information in Pinecone, and synthesized it with domain knowledge specific to each role. SuperAGI also has a tool that allows for document modification, which let the agents work on the same document from multiple perspectives. The end result was actionable data with very few errors. The time spent setting up the agents, mapping the project, and defining process goals for each agent was nothing compared to the amount of work we received back: a single run produced roughly 40 hours of labor for 4 people, and you can see the outputs on GitHub. Anyways, thank you for hosting the space. Be well.
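For anyone curious what the "store the agenda in Pinecone, recall it per role" step looks like in practice, here is a minimal illustrative sketch in Python, not our actual code: the index name "agent-agendas", the metadata fields, the embedding model, and the use of the classic pre-1.0 openai and pre-v3 pinecone-client APIs are all assumptions made for the example.

# Illustrative sketch only: store one agent's agenda in Pinecone and
# recall the items most relevant to a given role. Index name, metadata
# fields, and embedding model are hypothetical.
import os
import openai
import pinecone

openai.api_key = os.environ["OPENAI_API_KEY"]
pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment="us-west1-gcp")
index = pinecone.Index("agent-agendas")  # hypothetical index name

def embed(text: str) -> list[float]:
    """Embed text with OpenAI's embedding endpoint."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return resp["data"][0]["embedding"]

def store_agenda(agent_id: str, agenda: str, source: str) -> None:
    """Upsert an agent's agenda item, tagged with its domain-knowledge source."""
    index.upsert(vectors=[(
        f"{agent_id}-agenda",
        embed(agenda),
        {"agent": agent_id, "source": source, "text": agenda},
    )])

def recall_for_role(role_query: str, top_k: int = 5):
    """Retrieve the stored agenda items most relevant to a given agent role."""
    return index.query(vector=embed(role_query), top_k=top_k, include_metadata=True)

In a setup like this, each Upper Level Suite or supporting agent would call recall_for_role with its own role description before contributing its perspective to the shared document.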
We attempted to instill the deterministic, rule-based reasoning found in ELIZA into a more advanced, probabilistic model like an LLM. This serves a dual purpose: to introduce a controlled variable, in the form of ELIZA's deterministic logic, into the more "fuzzy" neural network-based systems, and to create a synthetic dataset that can be used for various Natural Language Processing (NLP) tasks beyond fine-tuning the LLM.

[ https://huggingface.co/datasets/MIND-INTERFACES/ELIZA-EVOL-INSTRUCT ]
[ https://www.kaggle.com/code/wjburns/pippa-filter/ ]

ELIZA Implementation: We implemented the script meticulously, retaining its original transformational grammar and keyword-matching techniques.
Synthetic Data Generation: ELIZA then generated dialogues based on a seed dataset. These dialogues simulated both sides of a conversation and were structured to include the reasoning steps ELIZA took to arrive at its responses.
Fine-tuning: This synthetic dataset was then used to fine-tune the LLM. The LLM learned not just the structure of human-like responses but also the deterministic logic that went into crafting those responses.
Validation: We subjected the fine-tuned LLM to a series of tests to ensure it had successfully integrated ELIZA's deterministic logic while retaining its ability to generate human-like text.

Challenges
Dataset Imbalance: During the process, we encountered issues related to data imbalance. Certain ELIZA responses occurred more frequently in the synthetic dataset, risking undue bias. We managed this through rigorous data preprocessing.
Complexity Management: Handling two very different types of language models, rule-based and neural network-based, posed its own unique set of challenges.

Significance
This project offers insights into how the strengths of classic models like ELIZA can be combined with modern neural network-based systems to produce a model that is both logically rigorous and contextually aware.
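To make the deterministic side concrete, here is a minimal illustrative sketch of ELIZA-style keyword matching with decomposition and reassembly rules. The three rules shown are toy examples written in Python; they are not the actual script or grammar we used.

# Toy ELIZA-style responder: each rule pairs a decomposition pattern with
# reassembly templates; the first matching rule produces the reply.
import random
import re

RULES = [
    (re.compile(r"\bI need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bbecause (.*)", re.I),
     ["Is that the real reason?", "What other reasons come to mind?"]),
]
DEFAULTS = ["Please tell me more.", "How does that make you feel?"]

def respond(utterance: str) -> str:
    """Apply the first matching decomposition rule; fall back to a default."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(DEFAULTS)

print(respond("I need a break from this project."))

Because every response is fully determined by the rule that fired, dialogues generated this way can record the exact reasoning step behind each turn, which is what the synthetic dataset captures for fine-tuning.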
In the neon-lit digital underworld, we wielded the code, Bash, and the terminal like a switchblade in a dark alley. With 2600 books jumbled in a main folder, chaos reigned. But then, we summoned OpenAI's API, a digital oracle, to decipher the cryptic hieroglyphs within those tomes. It read the tea leaves of text, determined the hidden truths, and neatly arranged them into categories, like cards in a deck. Each line of code a sharp stiletto, cutting through the chaos, and the terminal echoed with the hum of virtual triumph. In this digital noir, order emerged from chaos, and the API was our savior.
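Stripped of the noir, the pipeline was: read an excerpt of each book, ask the model for a category label, and move the file into a matching subfolder. We drove it from Bash and the terminal; the sketch below re-expresses the idea in Python for readability, and the flat "books" folder, the model choice, and the pre-1.0 openai client calls are assumptions for illustration rather than the exact script we ran.

# Illustrative sketch: sort a flat folder of text files into category
# subfolders using a label returned by the OpenAI API.
import os
import shutil
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
SOURCE = "books"  # hypothetical folder holding the ~2600 files

def categorize(snippet: str) -> str:
    """Ask the model for a single short category label for a text excerpt."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Reply with one short category label only."},
            {"role": "user", "content": snippet},
        ],
    )
    return resp["choices"][0]["message"]["content"].strip().replace("/", "-")

for name in os.listdir(SOURCE):
    path = os.path.join(SOURCE, name)
    if not os.path.isfile(path):
        continue
    with open(path, errors="ignore") as fh:
        label = categorize(fh.read(2000))  # a short excerpt is enough to classify
    dest = os.path.join(SOURCE, label)
    os.makedirs(dest, exist_ok=True)
    shutil.move(path, os.path.join(dest, name))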