In my country, there exists an inequality affecting visually impaired individuals, many of whom lack access to essential accessibility services. This drove me to create a mobile app that harnesses the power of AI to offer a transformative solution. By enabling visually impaired users to understand the world around them, the app directly aligns with two crucial United Nations Sustainable Development Goals (UN SDGs): Goal 3, "Good Health and Well-being," and Goal 10, "Reduced Inequalities."

The frontend is built with the Flutter framework, while the backend relies on Google AI Studio's Gemini Pro Vision model, accompanied by continuous TruLens evaluation of LLM performance, all served from Gemini-Lens' FastAPI server hosted on Google Cloud Run. The app's user interface is deliberately simple: a camera preview and a mic button. The user speaks a query, which is transcribed and sent to the backend together with the captured image, and the model's response is spoken back to the user.
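The request flow above can be sketched as follows. This is a minimal illustration, not the app's actual code: it assumes the `google-generativeai` Python SDK's multimodal parts format (a text prompt plus an inline image blob), and the helper name `build_request_parts` is hypothetical.

```python
def build_request_parts(query: str, image_bytes: bytes,
                        mime_type: str = "image/jpeg") -> list:
    """Pack the transcribed voice query and the captured camera frame
    into the multimodal parts list that a Gemini vision model accepts:
    a text part followed by an inline image blob."""
    return [
        query,                                        # transcribed speech
        {"mime_type": mime_type, "data": image_bytes},  # captured frame
    ]


# Server-side, the FastAPI endpoint would then do roughly this
# (sketch only -- requires an API key and the google-generativeai SDK):
#
#   import google.generativeai as genai
#   model = genai.GenerativeModel("gemini-pro-vision")
#   response = model.generate_content(build_request_parts(query, image_bytes))
#   # response.text is sent back and read aloud via text-to-speech
```

The backend only needs to return plain text, since the Flutter client handles both speech-to-text for the query and text-to-speech for the answer.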