Project Name: HireFit - AI-Enhanced CV Optimizer

In today's competitive job market, many candidates are unaware that their CVs are rejected by automated Applicant Tracking Systems (ATS) before they ever reach a human recruiter. The most common cause is a mismatch between the CV's content and the specific job description. To address this, we built HireFit, an AI-powered tool that analyzes a CV against a job description, identifies missing information, and generates an optimized version of the CV for the best possible ATS score.

Users simply upload their CV and the job description. HireFit uses an advanced AI model (Meta-Llama-3.1) to evaluate the CV and identify gaps against the job's requirements, then regenerates the CV with the necessary details filled in and the wording aligned to the job description, producing a professional, ATS-optimized CV. A minimal sketch of this matching step appears below.

In addition, HireFit provides interview preparation notes that highlight the areas the candidate should focus on, based on the changes made to the CV. These notes point out the missing skills or experience the candidate needs to prepare for, helping them be fully ready for their interview.

Users can download the updated CV and the interview preparation notes in multiple formats (PDF, DOCX), allowing easy customization and further edits. With HireFit, candidates no longer face the disappointment of automated rejections and can apply with confidence, knowing their CV stands a better chance of being noticed.
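As a hedged illustration of that matching step (not the production HireFit code), the sketch below asks a hosted Llama 3.1 instruct model to flag job-description requirements missing from a CV. The model id, prompt wording, and `analyze_cv` helper are illustrative assumptions.

```python
# Minimal sketch of the CV-vs-job-description gap analysis. Assumptions:
# huggingface_hub's InferenceClient and a hosted Llama 3.1 instruct model;
# the real HireFit pipeline may differ in model, prompt, and post-processing.
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3.1-70B-Instruct")

def analyze_cv(cv_text: str, job_description: str) -> str:
    """Ask the model to list job requirements missing from the CV."""
    prompt = (
        "Compare the CV below to the job description. "
        "List the skills, keywords, and experience the job requires "
        "that the CV does not mention, then suggest ATS-friendly rewrites.\n\n"
        f"CV:\n{cv_text}\n\nJob description:\n{job_description}"
    )
    response = client.chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=800,
    )
    return response.choices[0].message.content
```

The model's answer can then drive the CV regeneration and the interview preparation notes described above.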
We are excited to present our project, which addresses emergencies and environmental issues through an advanced AI-driven solution. For this hackathon, our team developed an application that generates accurate responses to a variety of emergency scenarios and environmental challenges.

Project Overview:

Model and Dataset: We used the LLaMA 3.1 405B model to generate a synthetic dataset of approximately 2,000 question-answer pairs. The dataset was initially created in Excel and later converted to JSON format for model training. We then fine-tuned the TinyLlama 1.1B chat model on this dataset, allowing our model to provide highly contextual and relevant responses. A sketch of the dataset step follows at the end of this write-up.

Training and Fine-Tuning: We generated the dataset on Google Colab using a T4 GPU, and fine-tuned the model on Kaggle using two T4 GPUs (T4 x2). After completing the fine-tuning process, we pushed the model to Hugging Face, making it accessible for deployment and further testing; a fine-tuning sketch also follows below.

Deployment: The model is deployed on Hugging Face Spaces behind a user-friendly Gradio UI (sketched below), which lets users enter queries and receive real-time responses directly from the model. All project files and necessary documentation have been committed to our repository, ensuring full transparency and accessibility.

Team: Our project was made possible by the collaborative efforts of a dedicated team of six members:
Team Lead: Umar Majeed
Team Members: Moazzan Hassan, Shahroz Butt, Sidra Hammed, Muskan Liaqat, Sana Qaisar

We would like to thank lablab.ai for this opportunity, and we look forward to the impact our application can make in real-world scenarios.
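A minimal sketch of the dataset step, assuming the QA pairs are collected in an Excel sheet with `question` and `answer` columns; the file names, scenario prompt, and `generate_pair` helper are assumptions for illustration.

```python
# Sketch: generate synthetic QA pairs with Llama 3.1 405B, then convert the
# Excel sheet to JSON for fine-tuning. File and column names are assumptions.
import pandas as pd
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3.1-405B-Instruct")

def generate_pair(scenario: str) -> str:
    """Ask the model for one emergency-response question-answer pair."""
    response = client.chat_completion(
        messages=[{
            "role": "user",
            "content": f"Write one question and a concise expert answer about: {scenario}",
        }],
        max_tokens=300,
    )
    return response.choices[0].message.content

# After collecting ~2,000 pairs in Excel, convert them to JSON for training.
df = pd.read_excel("qa_pairs.xlsx")  # columns: question, answer
df.to_json("qa_pairs.json", orient="records", indent=2)
```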
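The fine-tuning step could look roughly like the sketch below, here using TRL's SFTTrainer on the JSON dataset. The text template, hyperparameters, and output repo name are assumptions, and the exact SFTTrainer/SFTConfig arguments vary across trl versions.

```python
# Sketch: fine-tune TinyLlama-1.1B-Chat on the JSON QA dataset with TRL.
# Template, hyperparameters, and output_dir are illustrative assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="qa_pairs.json", split="train")

def to_text(example):
    """Flatten each QA record into a single prompt/response string for SFT."""
    return {"text": f"### Question:\n{example['question']}\n### Answer:\n{example['answer']}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="tinyllama-emergency",
        dataset_text_field="text",
        num_train_epochs=3,
    ),
)
trainer.train()
trainer.push_to_hub()  # publish the fine-tuned model to Hugging Face
```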
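Finally, the Gradio interface on Spaces can be as small as the sketch below. The base TinyLlama model id stands in for our fine-tuned checkpoint, and the labels and generation settings are illustrative.

```python
# Sketch of the Gradio UI deployed on Hugging Face Spaces. The model id is a
# placeholder for the team's fine-tuned checkpoint on the Hub.
import gradio as gr
from transformers import pipeline

generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

def respond(query: str) -> str:
    """Generate an emergency-response answer for the user's query."""
    output = generator(query, max_new_tokens=256, do_sample=True)
    return output[0]["generated_text"]  # includes the prompt plus the reply

demo = gr.Interface(
    fn=respond,
    inputs=gr.Textbox(label="Describe the emergency or environmental issue"),
    outputs=gr.Textbox(label="Model response"),
    title="Emergency Response Assistant",
)

demo.launch()
```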