3
3
United States
5 years of experience
By combining a custom conversational persona with grounded QA, Health-E is able to aid in processing patients, extract relevant information to fill out forms, and provide common-knowledge advice on applicable scenarios when prompted with a question. We hope this technology helps cut healthcare costs by reducing queue times and paperwork and improving efficiency overall. Moreover, we envision applications of Health-E's underlying technology beyond the healthcare industry. In particular, regional economies and local communities in developing countries, which now have wide internet access, could greatly benefit from virtual advisors or teachers that compensate for shortages of labor or limited access to education. We are really proud of what we built, and we hope you enjoy our presentation as well as experimenting with Health-E.
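To give a flavour of how the grounded-QA and form-filling flow could be wired up, here is a minimal sketch in Python. The `call_llm` helper, the persona wording, the clinic FAQ text, and the intake-field names are all illustrative assumptions, not Health-E's actual stack:

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a language-model call; swap in the real LLM client here."""
    raise NotImplementedError

# The persona plus clinic documents keep answers grounded in approved material.
PERSONA = "You are Health-E, a friendly patient-intake assistant."
CLINIC_FAQ = "Walk-in hours are 8am-6pm. Bring photo ID and your insurance card."

def extract_intake_fields(transcript: str) -> dict:
    """Pull structured intake-form fields out of a patient conversation."""
    prompt = (
        f"{PERSONA}\n"
        "From the conversation below, return JSON with the keys "
        "name, date_of_birth, symptoms, and insurance_provider. "
        "Use null for anything the patient did not mention.\n\n"
        f"Conversation:\n{transcript}"
    )
    return json.loads(call_llm(prompt))

def answer_grounded(question: str) -> str:
    """Answer common questions using only the clinic's own reference text."""
    prompt = (
        f"{PERSONA}\n"
        f"Answer using only this reference text:\n{CLINIC_FAQ}\n\n"
        f"Question: {question}\n"
        "If the reference text does not cover it, say you will flag a staff member."
    )
    return call_llm(prompt)
```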
Do you want to learn languages with the same speed and efficiency as the renowned polyglot XiaomaNyc? Look no further! With his method of immersive learning, you can dive headfirst into language acquisition and master new languages in an astonishingly short amount of time. Moreover, imagine having the unique opportunity to be tutored by none other than your own voice! This is made possible through prompt chaining and conversation design, which guide the conversation to output exactly what we need to build incredible custom lesson plans. This project uses ElevenLabs, Voiceflow, GPT-4, React, and the Whisper API to make this wonderful experience.
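As a minimal sketch of the prompt-chaining idea, the Python snippet below chains two prompts: the first distils a learner profile from the conversation, and the second turns that profile into a lesson plan. The `call_llm` helper stands in for a GPT-4 call, and the prompt wording is illustrative rather than the project's actual conversation design:

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a GPT-4 chat completion; wire up the real client here."""
    raise NotImplementedError

def build_lesson_plan(conversation: str) -> str:
    """Chain two prompts: extract the learner profile, then generate a plan."""
    # Step 1: the first prompt pulls structured facts about the learner.
    profile = call_llm(
        "Summarise the learner's target language, current level, and goals "
        f"from this conversation:\n{conversation}"
    )
    # Step 2: the second prompt consumes step 1's output, so the plan stays
    # anchored to what the learner actually said.
    return call_llm(
        "Write a one-week immersive lesson plan built around rapid, "
        f"real-world practice for this learner profile:\n{profile}"
    )
```

The resulting plan can then be narrated back to the learner in their own cloned voice via ElevenLabs.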
In-Car AI Agents is an innovative project designed to make in-car artificial intelligence smarter and more responsive in offline scenarios. Currently, most AI-powered car assistants rely on internet connectivity to process commands, leaving drivers without key functionality in areas with poor or no network coverage. Our solution addresses this gap by deploying edge-based AI models that can function without Wi-Fi, allowing drivers to interact with their cars using voice commands for basic operations such as controlling the air conditioning, navigation prompts, and music playback. The system uses pre-trained voice recognition models and offline edge computing to deliver real-time responses to the driver's requests, all processed locally. Additionally, the project explores simple computer vision tasks such as lane detection, integrated to run without network dependency. Our vision is to make cars smarter, safer, and more autonomous without requiring constant internet access. By providing an MVP, we aim to showcase how in-car AI can be optimized for offline use, improving the overall driving experience in areas where internet connectivity is unreliable.
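As a rough sketch of the offline command path, the snippet below assumes an on-device speech recogniser has already produced a text transcript and routes it to local car functions with a simple keyword matcher; the intent table and function names are illustrative, not the project's actual code:

```python
# Runs entirely on the vehicle's edge hardware: no network calls anywhere.

def set_air_conditioning(on: bool) -> str:
    return "Air conditioning on." if on else "Air conditioning off."

def play_music() -> str:
    return "Playing music."

def next_navigation_prompt() -> str:
    return "In 300 metres, turn right."

# Keyword-to-handler table kept in memory so commands work with zero connectivity.
INTENTS = {
    ("air conditioning on", "ac on"): lambda: set_air_conditioning(True),
    ("air conditioning off", "ac off"): lambda: set_air_conditioning(False),
    ("play music", "music"): play_music,
    ("navigation", "next turn"): next_navigation_prompt,
}

def handle_command(transcript: str) -> str:
    """Map an offline ASR transcript to a local car action."""
    text = transcript.lower()
    for keywords, handler in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return handler()
    return "Sorry, I did not catch that."

print(handle_command("please turn the air conditioning on"))
```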