A voice-enabled AI for digital accessibility

Created by team RAHA on March 06, 2024

AI technologies are growing at rocket speed, yet a large share of the global population still has no access to them. Raha is designed to close this digital inclusion gap: it uses Large Multimodal Models (LMMs) to provide a voice interface for individuals without access to conventional AI technologies. It takes a model- and tool-agnostic approach, ensuring compatibility with all desktop GUIs, including virtualized environments such as Citrix, as well as web interfaces. Raha builds on OpenAdapt, whose auto-prompted methodology, derived from human demonstrations, keeps agents grounded and minimizes errors in task execution. With a focus on practicality, Raha represents a step forward in making AI more universally accessible.

The system has three core stages: Voice Input Analysis, Foundation Models (e.g. GPT-4, ACT-1) that act as powerful automation engines, and OpenAdapt, which connects those foundation models to GUIs; a minimal sketch of this pipeline follows below. Being tool-agnostic and an alternative to RPA, Raha simplifies the whole digital journey even for users with little or no digital literacy. Beyond accessibility, enormous volumes of mental labor are wasted on repetitive GUI workflows, and we see business scope there too.
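A minimal sketch of the three-stage pipeline, assuming OpenAI's Whisper and GPT-4 APIs for the first two stages; `execute_on_gui` is a hypothetical placeholder standing in for the OpenAdapt-backed execution step, not OpenAdapt's actual API:

```python
# Sketch of Raha's three stages: voice input analysis -> foundation model
# planning -> GUI execution. Stage 3 is a hypothetical placeholder.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def transcribe_voice(audio_path: str) -> str:
    """Stage 1: Voice Input Analysis -- turn spoken audio into text."""
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio_file
        )
    return transcript.text


def plan_actions(instruction: str) -> str:
    """Stage 2: a foundation model (here GPT-4) maps the spoken
    instruction to a high-level plan of GUI steps."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Convert the user's request into numbered GUI steps.",
            },
            {"role": "user", "content": instruction},
        ],
    )
    return response.choices[0].message.content


def execute_on_gui(plan: str) -> None:
    """Stage 3 (hypothetical placeholder): hand the plan to an
    OpenAdapt-style agent grounded in recorded human demonstrations."""
    print("Would execute:\n", plan)


if __name__ == "__main__":
    text = transcribe_voice("request.wav")  # e.g. "Open my email and reply to Sam"
    execute_on_gui(plan_actions(text))
```

In practice the grounding in stage 3 comes from OpenAdapt's recorded human demonstrations, which is what keeps the agent's actions anchored to real GUI states rather than hallucinated ones.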
