4
4
Canada
3 years of experience
As a researcher, I have published in reputable journals and have substantial teaching experience. I have also recorded Object-Oriented Programming (OOP) lectures on YouTube, and my GitHub repository showcases strong coding skills. Additionally, I have experience blogging about OOP and have participated in international hackathons. Looking ahead, I aspire to continue my AI/ML research and publish my work in high-impact journals.
Vision Assist AI is a transformative project designed to significantly enhance the independence and quality of life for visually impaired individuals by harnessing cutting-edge AI technology. Utilizing the advanced capabilities of the LLaMA 3 model and GPT-4, this tool integrates sophisticated computer vision and machine learning algorithms to offer a wide range of assistance features, including real-time navigation guidance, text recognition, object identification, and face recognition.

The need for such solutions is underscored by the global assistive technology market, which was valued at USD 19.8 billion in 2020 and is projected to reach USD 31.3 billion by 2027. This growth is driven by factors such as the rising prevalence of visual impairments, an aging population, advancements in AI, and supportive government initiatives.

Vision Assist AI addresses critical challenges faced by visually impaired individuals, such as navigating unfamiliar environments, reading printed text, recognizing faces and objects, and reducing dependence on others for daily activities. By utilizing GPT-4 for object identification and feeding this information to LLaMA 3 for comprehensive assistance, our solution empowers users to perform daily tasks independently, enhances their personal safety, and significantly improves their overall quality of life.

What sets Vision Assist AI apart is its advanced AI capabilities, user-centric design, and commitment to continuous improvement through user feedback. The tool not only promotes inclusivity by making public spaces more accessible but also ensures that visually impaired individuals can participate more fully in social and professional activities. By fostering a more inclusive society, Vision Assist AI is poised to transform the lives of millions, making a substantial impact on the global stage.
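The two-stage pipeline described above (GPT-4 identifies objects, LLaMA 3 turns them into spoken guidance) can be sketched as follows. This is a minimal illustration, not the project's implementation: `identify_objects` and `compose_guidance` are hypothetical stand-ins for the actual model calls, with a hard-coded example response in place of GPT-4's output.

```python
from dataclasses import dataclass


@dataclass
class DetectedObject:
    label: str
    position: str  # e.g. "ahead" or "ahead-left"


def identify_objects(frame_description: str) -> list[DetectedObject]:
    """Stage 1 (stub): in the real system, a GPT-4 vision call would
    return the objects visible in the camera frame."""
    # Hard-coded example standing in for a model response.
    return [DetectedObject("door", "ahead"),
            DetectedObject("chair", "ahead-left")]


def compose_guidance(objects: list[DetectedObject]) -> str:
    """Stage 2 (stub): the detected objects would be passed to LLaMA 3,
    which phrases them as spoken navigation guidance."""
    parts = [f"a {obj.label} {obj.position}" for obj in objects]
    return "I can see " + " and ".join(parts) + "."


def assist(frame_description: str) -> str:
    # Full pipeline: camera frame -> object list -> spoken guidance.
    return compose_guidance(identify_objects(frame_description))
```

In the deployed tool, the guidance string would then be read aloud via text-to-speech; the stubs here simply make the data flow between the two models concrete.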
BlindAssist is a groundbreaking application tailored to empower blind and visually impaired individuals by integrating advanced artificial intelligence technologies. At its core, the application utilizes Falcon AI71 to process and analyze information from the user's surroundings, providing real-time, actionable insights. Through a combination of computer vision and natural language processing, BlindAssist transforms visual data into comprehensible text and audio feedback, enhancing the user's ability to navigate and interact with their environment. The application aims to bridge accessibility gaps and offer greater independence and confidence to visually impaired users.
We propose using the LLaMA 3.1:1B model as a local proxy server that manages a cache of JSON responses. Here is how the model can help us do just that:

- Query analysis and optimization
- Smart data management in the cache
- Optimized communication with the API
- Creation of intelligent cache-management policies
- Enriched responses, plus an added layer of security and privacy for the application
- Understanding user behavior and tailoring data to users' needs

We can also take a cloud-based approach when there is not enough RAM to run LLaMA 3.1:1B locally. In that case, queries are sent periodically to a server, which decides on the cache hierarchy: which items are the most important, and which items should already be assigned a deletion time.