United States
5 years of experience
In my current role, I have honed my skills as a Full Stack Software/Cloud Engineer and a Systems Administrator. I have designed and engineered cloud architecture on Linux servers, led teams in redesigning client websites, and assisted more than 300 customers a week with debugging and troubleshooting software issues. I have further developed my ability to troubleshoot technical problems and coach others in Linux and cloud computing. I am proficient in a wide range of technical skills, including CloudFormation, Systems Manager, Docker, Kubernetes, CI/CD (Jenkins), Ansible, Terraform, SQL, Python, Bash scripting, and more. I am also pursuing CompTIA certifications as well as the GSEC and CISSP certifications to further enhance my expertise. Beyond my technical skills, I am highly coachable and eager to learn new things. I thrive in team environments and am known for my ability to collaborate effectively with diverse groups of individuals. I am excited about the opportunity to bring my unique blend of skills and experience to a new team.
We attempted to instill the deterministic, rule-based reasoning found in ELIZA into a more advanced, probabilistic model such as an LLM. This serves a dual purpose:

- To introduce a controlled variable, in the form of ELIZA's deterministic logic, into "fuzzier" neural network-based systems.
- To create a synthetic dataset that can be used for a range of Natural Language Processing (NLP) tasks beyond fine-tuning the LLM.

[ https://huggingface.co/datasets/MIND-INTERFACES/ELIZA-EVOL-INSTRUCT ]
[ https://www.kaggle.com/code/wjburns/pippa-filter/ ]

Methodology

ELIZA Implementation: We implemented the script, meticulously retaining its original transformational grammar and keyword-matching techniques.

Synthetic Data Generation: ELIZA then generated dialogues based on a seed dataset. These dialogues simulated both sides of a conversation and were structured to include the reasoning steps ELIZA took to arrive at its responses.

Fine-tuning: This synthetic dataset was then used to fine-tune the LLM. The LLM learned not just the structure of human-like responses but also the deterministic logic that went into crafting those responses.

Validation: We subjected the fine-tuned LLM to a series of tests to confirm that it had successfully integrated ELIZA's deterministic logic while retaining its ability to generate human-like text.

Challenges

Dataset Imbalance: Certain ELIZA responses occurred far more frequently than others in the synthetic dataset, risking undue bias. We managed this through rigorous data preprocessing.

Complexity Management: Handling two very different types of language models, rule-based and neural network-based, posed its own unique set of challenges.

Significance

This project offers insights into how the strengths of classic rule-based models like ELIZA can be combined with modern neural network-based systems to produce a model that is both logically rigorous and contextually aware.
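To make the "deterministic logic" concrete, here is a minimal sketch of ELIZA-style keyword matching with pronoun reflection. The rules, reflection table, and function names below are illustrative stand-ins, not the original ELIZA script or this project's exact implementation:

```python
import random
import re

# Illustrative reflection table: swap first/second-person words so an
# echoed fragment reads naturally ("my job" -> "your job").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

# Illustrative rules: a keyword pattern plus response templates.
# The final catch-all pattern guarantees a response for any input.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"(.*)"),
     ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Apply the pronoun-swap table word by word."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza_respond(utterance: str, seed: int = 0) -> str:
    """Match the first rule whose pattern fits and fill in its template.

    A fixed seed keeps template selection deterministic, which matters
    when the output is used as reproducible synthetic training data.
    """
    rng = random.Random(seed)
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            template = rng.choice(templates)
            return template.format(*[reflect(g) for g in match.groups()])
    return "Please go on."
```

Because the rule table and seed fully determine the output, the same input always yields the same response, which is exactly the controlled, traceable behavior the project injects into the LLM.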
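Each synthetic exchange was structured to include the reasoning steps behind ELIZA's response. One way such a record might be serialized for instruction-style fine-tuning is sketched below; the field names are assumptions, and the published dataset may use a different schema:

```python
import json

def make_record(user_turn: str, matched_rule: str,
                reflected: str, response: str) -> str:
    """Serialize one synthetic exchange, with ELIZA's reasoning trace,
    as a JSON line. Keys here are hypothetical, not the dataset's schema."""
    record = {
        "input": user_turn,
        "reasoning": [
            f"matched keyword pattern: {matched_rule}",
            f"reflected fragment: {reflected}",
        ],
        "output": response,
    }
    return json.dumps(record)

line = make_record("I need a break", "i need (.*)",
                   "a break", "Why do you need a break?")
```

Keeping the rule match and the reflected fragment in the record is what lets the fine-tuned LLM learn the deterministic steps, not just the surface response.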