Browse applications built on Google Chirp technology. Explore PoC and MVP applications created by our community and discover innovative use cases for Google Chirp technology.
This project is an automated phone system that converts incoming voice calls into text and passes the transcribed message to an AI language model. The language model, or LLM, is connected to a vector database that contains information about a specific product. The LLM is orchestrated by LangChain, a framework for developing applications powered by language models; LangChain connects the LLM to the vector database and allows it to interact with its environment. When a customer calls, their voice is transcribed into text in real time and fed into the LLM. The LLM processes the text, retrieves relevant product information from the vector database, and generates a response via LangChain. This response is then converted back into speech using the ElevenLabs API and played to the customer over the phone. The system allows efficient and accurate handling of customer inquiries without the need for human intervention.
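The call flow described above can be sketched as follows. This is a minimal illustration, assuming simple callable stand-ins for Chirp, the vector store, the LangChain-managed LLM, and the ElevenLabs API; the prompt format and helper names are assumptions, not the project's actual code.

```python
# Hedged sketch of one turn of the call-handling pipeline:
# speech-to-text -> vector retrieval -> LLM -> text-to-speech.

def build_prompt(question: str, retrieved_docs: list) -> str:
    """Combine the transcribed question with product facts pulled
    from the vector database into a single LLM prompt."""
    context = "\n".join("- " + doc for doc in retrieved_docs)
    return (
        "Answer the customer using only the product facts below.\n"
        "Product facts:\n" + context + "\n"
        "Customer: " + question + "\n"
        "Agent:"
    )

def handle_call(audio_bytes: bytes, transcribe, retrieve, llm, synthesize) -> bytes:
    """One turn of the pipeline. The four callables stand in for Chirp,
    the vector store, the LangChain-wrapped LLM, and the ElevenLabs API."""
    question = transcribe(audio_bytes)          # Chirp speech-to-text
    docs = retrieve(question, k=3)              # vector-database lookup
    answer = llm(build_prompt(question, docs))  # LangChain-managed LLM call
    return synthesize(answer)                   # ElevenLabs text-to-speech
```

In production each callable would wrap the corresponding cloud client; injecting them as parameters keeps the glue logic testable without live credentials.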
Ermyth is an AI-driven system that listens to your stories. As you speak, Ermyth generates visuals fitting the events you describe, immersing you in an interactive narration. In the background, an Emotion Recognition System monitors the user's affective state, which lets the system provide coaching aimed at improving the user's resilience and sense of safety. The project we aim to build is a system that listens to the user and generates images fitting the story the user is narrating, while automatic emotion recognition (AER) runs in the background. The AI model (PaLM) interacts with the user when needed; these interactions are meant to support the user's sense of safety as they face topics of different kinds. To do so, the AI plays the role of a character within the story who helps the user confront problematic topics, inviting them to reflect, and optionally to relax, when AER detects sufficiently negative valence. Conversely, image generation mitigates the influence of negative emotions on the visuals while enhancing positive emotions. This creates a positive feedback loop intended to boost resilience, emotional awareness, and psychological safety.
Introducing TalkToMe, a groundbreaking web application that revolutionizes the way we engage with podcasts, books, and various forms of documentation. Gone are the days of passive consumption; now we enter a realm of interactivity and immersion. TalkToMe harnesses the power of advanced Large Language Models together with the Speech-to-Text and Vision models provided by Google Cloud, and this combination of state-of-the-art AI enables an unparalleled user experience. Imagine effortlessly uploading audio files, books, PDFs, or any content of your choosing, triggering the creation of a dynamic ChatSession. The web app works through the depths of your uploaded material, extracting its essence and comprehending its context; this deep understanding lets TalkToMe give insightful responses to your queries. Using intuitive speech interaction, you can actively engage with the ChatSession, asking questions that reach the core of the content, and TalkToMe answers concisely and informatively. But TalkToMe doesn't stop there: summarization distills lengthy material into digestible nuggets of wisdom, and general comparisons shed light on similarities and disparities. Unlock the true potential of your chosen materials with TalkToMe, transforming them into interactive companions on your journey of discovery. Embrace the future of interactive content consumption and join us as we rewrite the rules of engagement.
SmrtEd is an innovative web-based platform that revolutionizes the learning experience for students. It offers advanced features to enhance presentation creation, note-taking, and interactive learning. With customizable templates and multimedia integration, students can create visually appealing presentations with ease. The AI-powered audio-to-notes conversion feature automates the extraction of key concepts and timestamps from audio, saving time and enhancing study efficiency. SmrtEd's quiz creation tool enables students to transform their notes into interactive quizzes for active learning and self-assessment. Collaboration is fostered through seamless sharing of presentations, notes, and quizzes among students. SmrtEd caters to students at all education levels and supports tailored versions for institutions. Pricing options include a Basic Plan with free access, a Student Plan at $9.99 per month, and an Institution/Organization Plan with custom pricing. The platform is promoted through targeted digital marketing, strategic partnerships, social media engagement, and referral programs. SmrtEd empowers students to create captivating presentations, generate comprehensive notes, and engage in interactive quizzes. It revolutionizes the way students consume and engage with educational content, fostering effective learning, collaboration, and knowledge retention.
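One plausible building block for the audio-to-notes feature is turning word-level timestamps from speech-to-text output into timestamped notes. The sketch below is an assumption about how that grouping might work, not SmrtEd's actual code; the `(word, start_time)` input shape and the sentence-boundary heuristic are illustrative.

```python
# Group word-level STT timings into sentence-level notes, each
# prefixed with the [mm:ss] at which the sentence begins.

def notes_with_timestamps(words):
    """words: list of (word, start_time_seconds) pairs, as a
    speech-to-text API's word-timing output might be flattened."""
    notes = []
    current = []
    start = 0.0
    for word, t in words:
        if not current:
            start = t  # remember when this sentence began
        current.append(word)
        if word.endswith((".", "!", "?")):  # naive sentence boundary
            m, s = divmod(int(start), 60)
            notes.append(f"[{m:02d}:{s:02d}] " + " ".join(current))
            current = []
    if current:  # flush a trailing unterminated sentence
        m, s = divmod(int(start), 60)
        notes.append(f"[{m:02d}:{s:02d}] " + " ".join(current))
    return notes
```

For example, `notes_with_timestamps([("Hello", 0.0), ("world.", 0.4), ("Next", 61.0), ("point.", 61.5)])` yields two notes stamped `[00:00]` and `[01:01]`.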
Communication barriers and challenges exist for individuals who are deaf, hearing-impaired, or have difficulty making phone calls. These individuals may face limitations in understanding spoken language, maintaining focus, managing distractions, and effectively participating in phone conversations. Additionally, introverts may experience discomfort or anxiety when engaging in verbal communication. These factors hinder inclusivity, independence, and effective communication for these user groups. Solution: Our product, ConvoAI, offers a transformative solution to address these challenges. By harnessing the power of AI voice recognition, content generation, and real-time assistance, ConvoAI enables individuals to make phone calls with ease, confidence, and enhanced communication capabilities. The key features and benefits of ConvoAI include: Content Generation and Recommendations: ConvoAI generates AI-powered responses, prompts, and suggestions, reducing the need for constant input from the user and promoting engaging and smooth conversation flow. Personalized Experience: ConvoAI can be tailored to individual preferences, including language settings, visual cues, and content generation options, providing a personalized and comfortable communication environment. Time Management and Summaries: ConvoAI helps users manage call duration, offers time-related prompts, and provides post-call summaries of key points, action items, and important details discussed. By leveraging these powerful features, ConvoAI empowers individuals who are deaf or hearing-impaired, introverts, and others who face communication challenges to engage in phone conversations with confidence, independence, and improved comprehension. Our product enhances inclusivity, fosters effective communication, and ultimately enriches the lives of users by breaking down communication barriers.
OMORI helps create software business-analysis artefacts 2-3 times faster and optimises costs for AI tools with:
- Tailored AI Tools: at OMORI we search for and try new AI tools, then integrate and tune them to be specifically useful for software business-analysis tasks.
- Unified Framework: OMORI is a framework for BAs that helps create analytical artefacts faster and in a single place, with all the new AI tools under the hood. This results in a streamlined workflow for business analysts.
- Cost and Time Efficiency: OMORI optimises costs, letting you pay once and use all the integrated AI tools.
Features of the MVP:
- create text from user interviews
- generate a software requirements specification (SRS)
- generate User Stories
- generate Use Cases
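As a rough illustration of one MVP feature, user-story generation could be a thin prompt template over whichever integrated LLM OMORI routes the request to. The template wording and the injected `generate` callable here are assumptions for the sketch, not OMORI's implementation.

```python
# Illustrative-only sketch: generate user stories from interview notes
# by filling a prompt template and delegating to an LLM callable.

USER_STORY_TEMPLATE = (
    "From the interview notes below, write user stories in the form\n"
    "'As a <role>, I want <goal>, so that <benefit>.'\n\n"
    "Interview notes:\n{notes}"
)

def generate_user_stories(notes: str, generate) -> str:
    """`generate` stands in for the LLM call OMORI would dispatch to."""
    return generate(USER_STORY_TEMPLATE.format(notes=notes))
```

The same template-plus-callable pattern would cover the other MVP artefacts (SRS, use cases) by swapping the template.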
The goal is to summarise long audio recordings, helping people who don't have time to catch up on lecture recordings understand what they might have missed. In the future, lecture notes with key points and explanatory diagrams will be generated to further improve the quality of content that can be derived from these recordings. It would also be useful for those, myself included, who are too shy to ask questions during lectures but still need certain parts explained in a simpler, well-broken-down way, quite possibly better than the lecturer's own explanation. This would work well as part of a university's app for use by its students.
Medium-sized companies that are GCP customers have a problem. They hear about the great power of OpenAI and want to leverage the technology for their businesses, but at the same time they have significant proprietary data that they can't hand to just anyone, and they are concerned about putting this information in the hands of startups they've never heard of. How can we solve this problem? The new releases in Vertex AI are an opportunity to connect modern generative AI with the information they already have, without it ever leaving Google. Our implementation ingests meeting recordings using Google Cloud Run and Google Speech-to-Text with the latest-generation Chirp model. We then load that data into Google Cloud Storage and index it in Vertex AI Matching Engine via the Gecko embeddings model. The data is queryable through a LangChain-enabled Chainlit instance, again running on Cloud Run, which leverages both the latest PaLM 2 chat interface and vector search from Matching Engine. Now, instead of using Pinecone, Ada, and GPT-4, one can use an all-Google approach for compliance and safety.
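The ingestion path (transcribe with Chirp, chunk, embed with Gecko, index in Matching Engine) needs transcripts split into embedding-sized pieces. Below is a hedged sketch of such a chunker; the window and overlap sizes are assumptions, and the commented Vertex AI calls only indicate the surrounding shape without being executed.

```python
# Split a long meeting transcript into overlapping word-window chunks
# so each piece stays within the embedding model's input limit.

def chunk_text(text: str, max_words: int = 200, overlap: int = 20):
    words = text.split()
    step = max_words - overlap  # advance leaves `overlap` words shared
    chunks = []
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + max_words])
        if chunk:
            chunks.append(chunk)
        if start + max_words >= len(words):
            break  # last window already covered the tail
    return chunks

# The surrounding cloud calls would look roughly like (not executed here):
#   from vertexai.language_models import TextEmbeddingModel
#   model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")
#   vectors = [e.values for e in model.get_embeddings(chunk_text(transcript))]
#   ...then upsert `vectors` into the Matching Engine index.
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk.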
StoryGen represents a groundbreaking initiative poised to revolutionize moral education and character development for children globally. Our mission is to promote global moral education by leveraging artificial intelligence to adapt ancient fables from diverse cultures. In our interconnected world, it is vital to instill strong moral values while embracing the diversity of global cultures. Traditional fables have long been revered for their wisdom. However, by expanding our repertoire to include fables from various ancient traditions, we have an opportunity to create a truly inclusive and impactful educational experience. Our goal is to adapt these fables using AI techniques, ensuring they resonate with children worldwide. Key Features: Cultural Adaptation: StoryGen employs AI technologies to adapt fables, transcending cultural boundaries. For example, Panchatantra fables can be reimagined with western characters, enabling children in Western countries to enjoy and appreciate Indian wisdom. Similarly, fables from Western cultures can be adapted to resonate with children in other regions. This approach promotes cultural exchange and understanding. Age-Appropriate Content: StoryGen dynamically tailors the complexity and vocabulary of the stories to suit the developmental stage of the target audience. Younger children receive fables with simpler language and themes, while older children engage with more nuanced and thought-provoking narratives. Ethical Lessons and Moral Values: StoryGen carefully selects fables that promote positive values, critical thinking, empathy, and character development. For example, honesty is taught through "The Boy Who Cried Wolf" and gratitude through "The Lion and the Mouse." These lessons are universally applicable and resonate with children from different cultural backgrounds. Language and Communication Skills: StoryGen enhances language and communication skills through engaging stories. Example Content: https://www.youtube.com/@ModernPanchatantra
Introducing Pdf2Bot, an innovative web application designed to revolutionize the way we interact with documents. With Pdf2Bot, users can effortlessly create a dynamic chatbot that extracts information from uploaded PDF files and provides accurate answers to a wide range of queries. What sets Pdf2Bot apart is its ability to ground its responses in the uploaded document itself. At the core of Pdf2Bot lies the powerful text-bison model from Vertex AI, a cutting-edge language-processing technology. This sophisticated model is trained to understand and analyze textual content, enabling Pdf2Bot to comprehend uploaded PDF documents with remarkable accuracy. Whether it's a research paper, a technical manual, or a legal document, Pdf2Bot can handle diverse types of content, making it a versatile tool for many industries and purposes. Using Pdf2Bot is a breeze: users simply upload their desired PDF file through the user-friendly web interface, and the intelligent backend swiftly processes the document, extracting and organizing the relevant information into a structured format. Once this initial step is complete, the magic begins: Pdf2Bot's chatbot functionality comes into play, leveraging the processed content to provide intelligent responses to user queries. The chatbot understands natural language and can handle questions of varying complexity; whether it's a specific fact, a concept explanation, or a request for further details, Pdf2Bot can handle them all.
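A minimal sketch of the question-answering step, assuming the PDF's pages have already been extracted to plain text: build a page-tagged context, wrap it in a prompt, and pass it to a `predict` callable standing in for the text-bison model. The prompt wording and the `max_chars` budget are illustrative assumptions, not Pdf2Bot's actual code.

```python
# Assemble a page-tagged context from extracted PDF text, then ask
# the model to answer a question using only that context.

def pages_to_context(pages, max_chars: int = 6000) -> str:
    """Join extracted page texts with page markers, stopping before
    the combined context would exceed the model's input budget."""
    parts, total = [], 0
    for i, page in enumerate(pages, start=1):
        part = f"[page {i}]\n{page}"
        if total + len(part) > max_chars:
            break
        parts.append(part)
        total += len(part)
    return "\n\n".join(parts)

def answer_question(pages, question: str, predict) -> str:
    """`predict` stands in for the Vertex AI text-bison call."""
    prompt = (
        "Answer the question using only the document excerpt below.\n\n"
        + pages_to_context(pages)
        + f"\n\nQuestion: {question}\nAnswer:"
    )
    return predict(prompt)
```

A fuller system would pick the most relevant pages for each question (e.g. via embeddings) rather than truncating in page order.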
BEG Digest stands at the intersection of innovation and efficiency, a cutting-edge AI-powered application built by the Boston Eating Group. The app transforms voice into text and then leverages advanced language model technology to craft summaries from the converted text. Designed for the modern learner, podcast enthusiast, or any individual looking to streamline their content absorption, BEG Digest turns lengthy lectures or podcast episodes into bite-sized overviews in mere minutes. Whether you're revising from class recordings or want a quick rundown of your latest podcast episode, our app makes the task a breeze. It's the perfect tool for optimizing your time and enhancing knowledge retention. Immerse yourself in the world of AI-driven learning, redefining the way we consume and understand information in the digital era. BEG Digest: Listen, Transcribe, Summarize, and Absorb.
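For recordings whose transcripts exceed a single LLM context window, a common pattern (assumed here as an illustration, not confirmed as BEG Digest's approach) is map-reduce summarization: summarize each chunk, then summarize the partial summaries. The `summarize` callable stands in for the language-model call.

```python
# Map-reduce summarization sketch for long transcripts: chunk the
# text, summarize each chunk, then summarize the combined partials.

def summarize_long_transcript(transcript: str, summarize, chunk_words: int = 800) -> str:
    words = transcript.split()
    chunks = [
        " ".join(words[i:i + chunk_words])
        for i in range(0, len(words), chunk_words)
    ]
    partials = [summarize(chunk) for chunk in chunks]  # map step
    if len(partials) == 1:
        return partials[0]  # short enough for a single pass
    return summarize(" ".join(partials))               # reduce step
```

The chunk size would be tuned to the model's context limit; a production version might also overlap chunks to avoid cutting key points at boundaries.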
Communication is a fundamental aspect of human interaction, but for children with autism, it can often be a challenge. ChatBot for Autism Kids is designed to enhance communication abilities in children with autism. The ChatBot offers three distinct modes tailored to address the diverse needs of children at different developmental stages. The first mode, augmentative and alternative communication (AAC), provides visual aids such as symbols, pictures, or icons to facilitate understanding and expression. This mode enables children to effectively convey their needs, wants, and thoughts visually, fostering independence and reducing frustration. The second mode is text communication, which allows children who are more comfortable with written language to engage in meaningful conversations. The ChatBot offers a user-friendly interface where children can type out their messages and receive responses, enabling them to express themselves through the written word. The third mode is speech communication, a groundbreaking feature that utilizes advanced speech recognition technology. Through this mode, children can use their own voice to communicate, with the ChatBot understanding and responding accordingly. This not only promotes speech development but also provides a sense of empowerment and self-confidence to children who may struggle with verbal communication. One of the key advantages of our ChatBot is its simplicity and ease of use. It can be easily programmed by speech and occupational therapists, as well as other key stakeholders involved in the child's development. This means that no specialized computer skills are required, ensuring that caregivers and educators can easily customize the ChatBot to meet the specific needs of each child.