This repository was made for a hackathon organized by Lablab.AI. The challenge was to create different types of agents that carry out several tasks. Use the power of LLMs with LangChain and OpenAI to scan through your documents and find information and insights with lightning speed. Create new content with the support of state-of-the-art language models, and voice-command your way through your documents. We will show you 5 different agents that we built: 1. AssemblyAI Agent 2. PandasAI Agent 3. Presentation Agent 4. README Agent 5. Webscraping generator Agent
Category tags: Business, Transportation and Delivery, Knowledge Base
We attempted to instill the deterministic, rule-based reasoning found in ELIZA into a more advanced, probabilistic model like an LLM. This serves a dual purpose: to introduce a controlled variable, in the form of ELIZA's deterministic logic, into "fuzzier" neural-network-based systems, and to create a synthetic dataset that can be used for various Natural Language Processing (NLP) tasks beyond fine-tuning the LLM. [ https://huggingface.co/datasets/MIND-INTERFACES/ELIZA-EVOL-INSTRUCT ] [ https://www.kaggle.com/code/wjburns/pippa-filter/ ] ELIZA Implementation: We implemented the script meticulously, retaining its original transformational grammar and keyword-matching techniques. Synthetic Data Generation: ELIZA then generated dialogues based on a seed dataset. These dialogues simulated both sides of a conversation and were structured to include the reasoning steps ELIZA took to arrive at its responses. Fine-tuning: This synthetic dataset was then used to fine-tune the LLM, which learned not just the structure of human-like responses but also the deterministic logic that went into crafting them. Validation: We subjected the fine-tuned LLM to a series of tests to ensure it had successfully integrated ELIZA's deterministic logic while retaining its ability to generate human-like text. Challenges: Dataset Imbalance: certain ELIZA responses occurred more frequently in the synthetic dataset, risking undue bias; we managed this through rigorous data preprocessing. Complexity Management: handling two very different types of language models, rule-based and neural-network-based, posed its own set of challenges. Significance: This project offers insights into how the strengths of classic models like ELIZA can be combined with modern neural-network-based systems to produce a model that is both logically rigorous and contextually aware.
MIND INTERFACES
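The ELIZA keyword-matching and transformational-grammar core described above can be illustrated with a minimal sketch. This is a simplified, generic ELIZA-style responder, not the project's actual script; the rules and reflection table here are invented for the example:

```python
import re

# Pronoun "reflections" implement the transformational-grammar step:
# first-person fragments are echoed back in the second person.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules: each pattern captures a fragment to transform and
# slot into a canned response template.
RULES = [
    (re.compile(r"i need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first/second-person tokens so the echo reads naturally.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    # Deterministic: the first matching rule always wins, which is
    # exactly the controlled, traceable logic the project exploits.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."
```

Because every response is fully determined by the rule table, each synthetic dialogue turn can be logged together with the rule that produced it, which is what makes the reasoning steps recoverable for the fine-tuning dataset.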
Our platform revolutionizes recruitment with personalized experiences for candidates and streamlined processes for employers. Challenges with traditional recruitment systems are: 1. Time-consuming 2. Screening hassles 3. Inconsistent results 4. Ineffective methods. Solution: AutoRecruit AI is a comprehensive, cutting-edge answer to these problems. It puts the candidate at the center of its process and applies a breakthrough Llama-based algorithmic approach to achieve unprecedented accuracy at unprecedented speed. Features: 1. Candidate sourcing 2. Resume parsing 3. Candidate scoring and summary 4. Personalized engagement. Benefits: 1. Time saving 2. Cost-effectiveness 3. Efficiency optimization 4. Better candidate fit
AutoHire AI
Visionary Plates: Advancing License Plate Detection Models is a project driven by the ambition to revolutionize license plate recognition using cutting-edge object detection techniques. Our objective is to significantly enhance the accuracy and robustness of license plate detection systems, making them proficient in various real-world scenarios. By meticulously curating and labeling a diverse dataset, encompassing different lighting conditions, vehicle orientations, and environmental backgrounds, we have laid a strong foundation. Leveraging this dataset, we fine-tune the YOLOv8 model, an architecture renowned for its efficiency and accuracy. The model is trained on a carefully chosen set of parameters, optimizing it for a single class: license plates. Through iterative experimentation and meticulous fine-tuning, we address critical challenges encountered during this process. Our journey involves overcoming obstacles related to night-vision scenarios and initial model performance, with solutions like sharpening and gamma-control methods. We compare and analyze the performance of different models, including YOLOv5 and traditional computer vision methods, ultimately identifying YOLOv8 as the most effective choice for our specific use case. The entire training process, from dataset curation to model fine-tuning, is efficiently facilitated by Lambda Cloud's powerful infrastructure, optimizing resources and time. The project's outcome, a well-trained model, is packaged for easy access and distribution in the 'run.zip' file. Visionary Plates strives to provide a reliable and accurate license plate detection system, with the potential to significantly impact areas such as traffic monitoring, parking management, and law enforcement. The project signifies our commitment to innovation, pushing the boundaries of object detection technology to create practical solutions that make a difference in the real world.
AI Avengers
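The gamma-control preprocessing mentioned for night-vision frames can be sketched with a standard lookup-table gamma correction. This is a generic illustration of the technique, assuming uint8 frames; the project's actual parameters and pipeline are not specified here:

```python
import numpy as np

def gamma_correct(image: np.ndarray, gamma: float) -> np.ndarray:
    """Apply gamma correction to a uint8 image.

    gamma > 1 brightens dark regions (useful for night frames
    before running the detector); gamma < 1 darkens them.
    """
    inv = 1.0 / gamma
    # Precompute the 256-entry lookup table once, then map every
    # pixel through it with a single fancy-indexing operation.
    table = ((np.arange(256) / 255.0) ** inv * 255).astype(np.uint8)
    return table[image]
```

In practice this is typically applied per frame before inference (OpenCV offers an equivalent via `cv2.LUT`); plain NumPy indexing keeps the sketch dependency-light.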
Exciting news from Business Llama! We're thrilled to introduce "Business Llama: Optimized for Social Engagement," our latest project that's set to transform the way you approach business planning and go-to-market (GTM) strategies. With the power of advanced, fine-tuned models driven by the renowned Clarifai platform, we're taking your business strategies to the next level. Here's what you can expect: Enhanced decision-making: make smarter, data-driven decisions that lead to business success. Improved business plans: develop robust and realistic plans backed by deep insights. Optimized go-to-market strategies: reach your target audience more effectively than ever before. Competitive advantage: stay ahead in the market by adapting quickly to changing conditions. Resource efficiency: maximize resource allocation and reduce costs. Personalization: tailor your offerings to individual customer preferences. Scalability: apply successful strategies across various products and markets. Risk mitigation: identify and address potential risks proactively. Continuous improvement: keep your strategies aligned with evolving market conditions. Join us on this journey to elevate your business game! Stay tuned for updates and exciting insights. The future of business planning and GTM strategies is here, and it's more engaging than ever. #BusinessLlama #SocialEngagement #DataDrivenDecisions #Clarifai #GTMStrategies
Team Tonic
We present our solution, Lec2Learn, which fine-tunes on open-source learning data to generate learning objectives. We start by obtaining all textbooks from opentextbookbc and process the HTML to extract lectures and their learning objectives, giving us pairs of lectures with their corresponding question groups. On the server we fine-tune the Microsoft Phi 1.5 model on this opentext data so that the model gets better at generating learning objectives. For the prompt we give the lecture and the learning objectives, and we always start the objective with "Describe" so the model does not generate random data.
FineTuners
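The prompt format described above, lecture text followed by an objectives cue that always begins with "Describe", could be sketched as follows. The function name and field labels are hypothetical, invented for illustration; the team's actual template is not given:

```python
def build_prompt(lecture: str) -> str:
    """Build a fine-tuning/inference prompt for objective generation.

    Ending the prompt with the fixed token "Describe" seeds the
    model so its completion continues a learning objective instead
    of drifting into unrelated text.
    """
    return (
        "Lecture:\n"
        f"{lecture}\n\n"
        "Learning objectives:\n"
        "Describe"
    )
```

During fine-tuning, each training example would pair this prompt with the remainder of the gold objective ("Describe ..."), so the constraint is learned rather than merely prompted.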
"Brilliant idea! It's fantastic to see such a complex project here, especially one created here in Barcelona. The concept already addresses multiple problems; I would suggest focusing on a single vertical to delve deeper into the specific pain points of the users. This would provide a more targeted and impactful solution. Wishing you the best of luck with your project!"
Paulo Almeida
co-founder of Stunning Green