BlindNav is a navigation aid tailored for visually impaired individuals, combining state-of-the-art AI models for seamless interaction. It uses YOLOv10 for object detection and tracking, identifying both static and dynamic objects in real time. Whisper captures voice input from users, while O1 generates context-aware, concise responses to user questions about their surroundings. Users can upload or capture images, and the app processes them to give auditory feedback based on the object tracking results. BlindNav aims to provide an intuitive, practical navigation solution for visually impaired users, improving their independence and situational awareness.
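The flow described above (speech in, detection, spoken answer out) can be sketched roughly as follows. This is an illustrative outline only, not BlindNav's actual code: the function names and the `Detection` type are hypothetical, and the model calls (YOLOv10, Whisper, O1) are stubbed out with placeholders.

```python
# Hypothetical sketch of BlindNav's pipeline; all names are illustrative.
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    label: str
    moving: bool  # distinguishes dynamic from static objects


def detect_objects(image) -> List[Detection]:
    # Placeholder for YOLOv10 inference on the captured/uploaded image.
    return [Detection("person", True), Detection("bench", False)]


def transcribe(audio) -> str:
    # Placeholder for Whisper speech-to-text on the user's voice input.
    return "what is in front of me"


def answer(question: str, detections: List[Detection]) -> str:
    # Placeholder for O1: condense the detections into a concise reply.
    moving = [d.label for d in detections if d.moving]
    still = [d.label for d in detections if not d.moving]
    parts = []
    if moving:
        parts.append("moving: " + ", ".join(moving))
    if still:
        parts.append("stationary: " + ", ".join(still))
    return "; ".join(parts)


def navigate(image, audio) -> str:
    question = transcribe(audio)
    detections = detect_objects(image)
    # The returned text would be read aloud to the user via text-to-speech.
    return answer(question, detections)
```

In the real app each placeholder would call the corresponding model; the sketch only shows how the three stages hand results to one another.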
"Excellent work."
Walaa Nasr Elghitany
Lablab Head Judge
"Definitely this idea has a long way to go, good work. But I suggest training a basic computer vision model on a small dataset and comparing it with the YOLOv10 results; this way you can diversify the results and give a more solid comparison between the two."
Muhammad Inaamullah
ML Engineer