Cohere Applications

Browse applications built on Cohere technology. Explore PoC and MVP applications created by our community and discover innovative use cases for Cohere technology.

Project Eval

Eval aims to address the problem of subjectively evaluating test answers. Traditionally, this task has been carried out manually by human graders, which can be time-consuming and prone to bias. To address this, the project uses Cohere-powered APIs to automate the evaluation process. The use of Cohere APIs allows for the integration of advanced natural language processing techniques, enabling the system to accurately understand and analyze the content of test answers. The custom model built on these APIs then scores the answers based on suitable metrics, which can be tailored to the specific requirements of the test or assessment.

One potential application of this technology is in education, where it could be used to grade assignments or exams more efficiently and with less bias. It could also be used in professional settings for evaluating job applications or performance reviews. In addition to increasing efficiency and reducing bias, automated evaluation has the potential to provide more consistent and reliable scoring, helping to ensure that test-takers receive fair and accurate assessments of their knowledge and skills.

The model was evaluated on 4 major metrics:

- Semantic Search: the primary scoring strategy of Eval. It is used to semantically understand the answer and evaluate it based on content rather than textual similarity alone. Cohere Embed was used to generate embeddings for 5 suggested answers to the question and for the answer being checked. We then compute the distance between the answer and its nearest neighbor among the 5 suggestions, and use this distance to grade the answer.
- Duplication Check: partially correct answers that duplicated text tended to get higher similarity scores than those without duplication. To stop students from exploiting this to gain extra marks, a duplication checker was implemented based on the Jaccard similarity between sentences within the answer.
- Grammar Check: this strategy checks the grammar of the answer and assigns a score based on the number of grammatical errors. We used the Cohere Generate endpoint to produce a grammatically correct version of the answer, then checked the cosine similarity between the generated version and the original to determine whether the original was grammatically correct.
- Toxicity Check: this detects toxic content in the answer and penalizes the answer if it is toxic. We trained a custom classification model on Cohere using the Social Media Toxicity Dataset by SurgeAI, which achieved 98% precision on the test split.

We also implemented Custom Checks, which allow users to assign different weights to each of the three different metrics based on how important they are for evaluating the answer, enabling a more personalized evaluation. We built our custom model into a Flask-based REST API server deployed on Replit to streamline usage and give people access to the model's full functionality. We also built a highly interactive UI that lets users easily interact with the API, evaluate their answers, and submit questions.
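
The following is a minimal sketch, assuming Cohere's Python SDK, of how the nearest-neighbor embedding score, the Jaccard duplication check, and the weighted Custom Checks described above could fit together. The function names, weights, and thresholds are illustrative and not taken from the Eval codebase; depending on the SDK version, `co.embed` may also require model or input-type arguments.

```python
# Illustrative sketch of Eval-style scoring (assumed API usage, not the project's code).
import numpy as np
import cohere  # pip install cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def semantic_score(answer: str, suggested_answers: list[str]) -> float:
    """Embed the answer and the suggested answers with Cohere Embed and
    score by cosine similarity to the nearest suggested answer."""
    resp = co.embed(texts=[answer] + suggested_answers)
    vectors = np.array(resp.embeddings)
    ans, refs = vectors[0], vectors[1:]
    sims = refs @ ans / (np.linalg.norm(refs, axis=1) * np.linalg.norm(ans))
    return float(sims.max())  # closer to 1.0 means closer to a model answer

def duplication_penalty(answer: str, threshold: float = 0.6) -> float:
    """Fraction of sentence pairs that are near-duplicates by Jaccard similarity."""
    sentences = [s.strip().lower() for s in answer.split(".") if s.strip()]
    pairs = duplicates = 0
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            a, b = set(sentences[i].split()), set(sentences[j].split())
            pairs += 1
            if a | b and len(a & b) / len(a | b) > threshold:
                duplicates += 1
    return duplicates / pairs if pairs else 0.0

def grade(answer: str, suggestions: list[str],
          w_semantic: float = 0.8, w_duplication: float = 0.2) -> float:
    """Combine metrics with user-chosen weights, as in Eval's Custom Checks."""
    return (w_semantic * semantic_score(answer, suggestions)
            - w_duplication * duplication_penalty(answer))
```

In the full project, the grammar and toxicity checks would contribute additional weighted terms in the same way.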

chAI
replit
Cohere

I-Rene

I-Rene provides CBT (cognitive behavioural therapy) for the user through the Cohere Conversant AI tool. An NFT is minted for free by the user to track their CBT sessions and take pride in their healing process. It is a free, open-source, specific and effective mental health therapy needed by everyone, anywhere, at any time.

The mental health AI chatbot will be developed as a standalone application that users can download and install on their mobile devices or access through a web-based platform. The chatbot will collect and use user feedback to improve its performance and effectiveness. This will involve monitoring user interactions and responses, and using machine learning algorithms to continuously adapt and improve the chatbot's responses and support.

The chatbot will use sentiment analysis to understand the emotional state of users and react accordingly. For example, if a user is feeling sad or anxious, the chatbot can provide appropriate support and resources to help the user manage their emotions and feelings. It will also use entity extraction to provide context-dependent answers and support rather than just generic responses: the user's messages are analyzed to extract relevant entities and information, such as the user's goals, concerns, and challenges, which the chatbot can then use to provide personalized and tailored support.

The chatbot will be integrated with a decentralized autonomous organization (DAO) and the Metaverse, a virtual shared space for communities and organizations. This will enable the chatbot to access and use decentralized resources and data to provide more accurate, relevant, and engaging support to users.
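
As a rough illustration of the sentiment-analysis step described above, the sketch below uses Cohere's classify endpoint to label a message and choose a response path. The labels, example texts, and helper names are hypothetical and not taken from the I-Rene project, and the example class name varies between SDK versions.

```python
# Hypothetical sentiment routing for a CBT chatbot (not I-Rene's actual code).
import cohere  # pip install cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# A tiny set of labelled examples; a real chatbot would use far more,
# reviewed by mental-health professionals.
examples = [
    cohere.ClassifyExample(text="I feel hopeless and can't sleep", label="distressed"),
    cohere.ClassifyExample(text="Everything feels overwhelming lately", label="distressed"),
    cohere.ClassifyExample(text="I'm worried something bad will happen", label="anxious"),
    cohere.ClassifyExample(text="I keep overthinking my exams", label="anxious"),
    cohere.ClassifyExample(text="Today was actually a good day", label="positive"),
    cohere.ClassifyExample(text="I felt calm after my walk", label="positive"),
]

def route_reply(user_message: str) -> str:
    """Classify the user's emotional state and pick a CBT-style response path."""
    result = co.classify(inputs=[user_message], examples=examples)
    label = result.classifications[0].prediction
    if label == "distressed":
        return "offer a grounding exercise and point to crisis resources"
    if label == "anxious":
        return "start a guided thought-record exercise"
    return "reinforce the positive reflection"
```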

Mental Health AI
Streamlit
Cohere

AI Chatbot

Chatbots in the healthcare field are providing patient assistance and care. AI-powered medical assistants can book appointments, monitor a patient's health status, and perform other time-intensive responsibilities such as inventory, billing, and claims management. There are three key limitations: 1) explainability, 2) data requirements, 3) transferability.

STATEMENT: To be able to enter a prescription with structured data into a software system within a time comparable to a handwritten prescription.

IDEA: 1) Automate handwritten and digital prescriptions to reduce entry time. 2) Improve the effectiveness of customer service teams. 3) Reduce the potential for human error. 4) Collect candid and meaningful customer feedback. 5) Guide customers along the path to purchase. 6) Build stronger customer relationships.

Chatbots reside in the most commonly used apps in the form of assistants on various websites, where they can converse with users. With advanced machine learning algorithms and natural language processing methods, these chatbots can create maps linking symptoms and diseases. Chatbots in the medical care industry ask some standard questions and help create a profile based on age, sex, and medical history. They can record the user's history and analyze symptoms based on the user's inputs. They can also use image and voice processing to record and match symptoms against the database, and use the gathered information to automatically print handwritten and digital prescriptions.

Techniques involved: natural language processing, deep learning, context-aware processing, intelligent robots, neural networks, fuzzy logic, support vector machines, genetic algorithms, and hybrid systems.
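
A toy sketch of the profile-building and symptom-matching flow outlined above. The `PatientProfile` structure and the hard-coded symptom map are purely illustrative placeholders for whatever clinical data and models a real system would use.

```python
# Hypothetical intake flow: collect a basic profile, then match reported
# symptoms against a small symptom-to-condition map (illustrative only).
from dataclasses import dataclass, field

@dataclass
class PatientProfile:
    age: int
    sex: str
    medical_history: list[str] = field(default_factory=list)
    symptoms: list[str] = field(default_factory=list)

# Toy mapping; a real system would be built from clinical data, not hard-coded.
SYMPTOM_MAP = {
    "fever": ["flu", "infection"],
    "cough": ["flu", "bronchitis"],
    "headache": ["migraine", "tension headache"],
}

def candidate_conditions(profile: PatientProfile) -> dict[str, int]:
    """Count how many reported symptoms point at each candidate condition."""
    counts: dict[str, int] = {}
    for symptom in profile.symptoms:
        for condition in SYMPTOM_MAP.get(symptom.lower(), []):
            counts[condition] = counts.get(condition, 0) + 1
    return dict(sorted(counts.items(), key=lambda kv: -kv[1]))

profile = PatientProfile(age=34, sex="F", symptoms=["fever", "cough"])
print(candidate_conditions(profile))  # {'flu': 2, 'infection': 1, 'bronchitis': 1}
```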

AiDemanica
Cohere

mEYE Buddy App

mEYE Buddy is an application whose main role is to act as your very own personal assistant, making the world a little more accessible for anyone who is visually impaired and requires assistance. The best sides of this app are that it is highly affordable, offers a premium version, and is based on a loyalty program (business model). With mEYE Buddy, not only would the lives of blind people be much easier, but they could have the privilege of independence: they would not need to rely on their caretaker or service dog to help them with everyday chores. The state-of-the-art tool for assistance to visually impaired persons! This state-of-the-art AI technology will assist you in your everyday life.

How the app works: For the first login you would have to register, and you'd do that using your fingerprint. Every time the app is opened, the voice will tell you where everything is located on the screen. The design was made simple and the buttons were made big for easy access. The user can always press the icon in the middle of the screen for a reminder of the locations of the buttons. The voice assistant is the main function of the app. It activates the AI, which uses a connected camera to describe the surroundings and warn the user of hazards. The key places tab takes the user to a tab where they can activate a guide to registered locations that are important to them (workplace, supermarket, hospital, home, coffee shop, etc.). They can also register new locations and remove old ones. Hazards that were noted by the AI can also be registered here. The devices tab is simple: it is used to connect the device to an external camera, preferably one on the user's glasses or attached to their body. The user can also connect their smart watch for easy access. The settings tab is pretty self-explanatory; however, besides the settings, it also contains emergency information about the user in case they need help from someone.

The "mEYE Buddy" app will be connected to a camera on the glasses or on your body, and will describe everything important going on in front of you: every peculiar movement that triggers its sensors, and it will warn you against any potential hazards that may come your way. The app can also be asked to describe something specific in more detail. Our Buddy also stores any hazard or newly recognized item in its already vast and enriched database. The app can also be told where certain points of interest are located (workplace, supermarket, hospital, home, coffee shop, etc.) for easier access later. The app comes with a simple UI with big buttons for input, and can be instructed through the AI voice as well. It also comes with a 3x3 keyboard for simpler accessibility.

Aurora
GPT-3
Cohere

BridgeDoc

https://krusnabalar-bridgedoc-frontendsrcmydoc-jwe73t.streamlitapp.com/

INSPIRATION: When was the last time you had an uncomfortable sensation that you didn't know how to describe? You look it up on Google and WebMD and ask on Reddit, but find yourself just as uninformed and even more stressed than before. You end up calling a clinic or hospital to speak with a doctor, and find yourself having to wait a week until the next available appointment. The general public is not trained to be aware of and describe the symptoms they might be going through. The way we describe our sensations can vary enormously; often we use idioms and other figures of speech. The stress and struggle of being unable to understand our body's pain can be frustrating. Even in speaking with doctors, there are often misunderstandings and a lot of back and forth until the doctor can finally understand the patient's symptoms. In the field of medicine and healthcare, that kind of subjectivity and unpredictability can be dangerous, inefficient, and costly. That got our team to wonder: what if there were a way to effectively predict a patient's symptoms from their own description, in a fraction of the time, at no cost to the patient? A BMJ Journal study performed research on clinical text to extract mental health symptoms using a classification NLP model, citing the automatability of the symptom detection process as a credible way to approach this issue.

SOLUTION: Introducing BridgeDoc, the tool that doctors and the general public can use to understand and identify their symptoms and diseases. BridgeDoc uses classification tools provided by co:here to detect and identify specific symptoms, for the knowledge of the doctor, and possible disease diagnoses, for the knowledge of the doctor and the patient. It eases communication between a doctor, clinic, or hospital and their patients by using a model trained on colloquial descriptions of symptoms to identify the likelihood of the patient's symptoms.

COMPETITIVE ADVANTAGE: Companies like WebMD, Mayo Clinic, DearDoc, and Mercury Healthcare all lack a way to enhance user inquiry and streamline communication between doctors and patients. BridgeDoc equips users and medical businesses to avoid the struggle and misunderstandings involved in translating a patient's description of their issue into the doctor's knowledge of what exactly those symptoms are, helping them efficiently pinpoint the most likely solutions. This has not been used in a professional medical sense for patients and can provide a significant edge over competitors with a high-quality symptom prediction model.

REVENUE AND EXPENSES: BridgeDoc provides both B2B (doctors, hospitals) and B2C (online website clients) solutions. B2B solutions will initially be provided per user at a contract price based on the user report and the needs of the doctor or hospital. B2C users will have free access. Expenses will include website hosting once we move to a more customized website, as well as access to more reliable data by finding methods to gain secure access to it. There will also be research expenses for improving model prediction.
KEY METRICS: We will track the following key metrics. Growth: number of doctor contracts acquired per month, number of users accessing the website. Engagement: use tools like Hotjar to track website interaction (the user remains anonymous), and track the number of searches made per user. Marketing: to measure the success of our marketing efforts, we will evaluate cost per acquisition and cost per click, and study trends in impressions to iteratively generate better marketing assets. We will also research the marketing tactics of companies like Zocdoc, who created an industry tool used by doctors. Product: we will track and receive product feedback dynamically using tools like Hotjar to understand user engagement with the product (what parts of the website are being used, how long users spend on the website, etc.).

HOW WE BUILT IT: We built everything with Python. Using Jupyter notebooks, we tested co:here's endpoints and integrated them into the boilerplate provided by LabLab and co:here. The app is deployed using Streamlit. Data was collected from various sources such as Reddit and Google search scrapers, and forums. To reach co:here's requirement of a minimum of 250 examples to train the classify model, and given the scarcity of data, we used co:here's generate tool to build more examples by fine-tuning the model and using specific input phrases that generate reliable results.

WHAT'S NEXT FOR BRIDGEDOC: We would love to see BridgeDoc become a standalone tool that can be integrated with online tools for clinics, doctors with private practices, and hospitals, which often deal with patient capacity limits, to automate report creation by listing possible symptoms and diagnoses automatically. This would require adding a co:here-trained chatbot that can extract information from the user in a friendly, secure, and reliable way, similar to how a doctor might on the phone, and produce a report based on the user's profile (gender, age, previous conditions, etc.) to improve symptom and disease prediction. Additionally, we leveraged insights from our mentor Ervin and the "How to get funding from your startup" workshop by Pawel Czech and Mathias Asberg to understand business needs and identify the gaps in what our product currently offers. It is important to focus our efforts on specific medical domains to improve accuracy and user retention, and to help market the product and win clients. The user base can expand by changing the architecture of our model into a bigger tool that operates as follows: take input from the user and classify which medical domain the query is in; based on the prediction confidence levels, take the top medical domains and send the user input to specialized classify models trained for those specific domains; then get the top predictions of symptoms as well as diseases from those results. This does not narrow down the user base, and more importantly, it provides improved symptom prediction and more reliable results.
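
A minimal sketch of the proposed two-stage architecture described above, assuming fine-tuned Cohere classify models referenced by placeholder model IDs. The IDs, confidence threshold, and function names are assumptions for illustration, not BridgeDoc's implementation, and response field names may differ slightly across SDK versions.

```python
# Sketch of the proposed domain-routing architecture (assumed structure, not BridgeDoc's code).
import cohere  # pip install cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# Hypothetical fine-tuned classify model IDs, one per medical domain.
DOMAIN_MODELS = {
    "dermatology": "derm-model-id",
    "respiratory": "resp-model-id",
    "mental_health": "mental-health-model-id",
}

def predict_domain(user_text: str) -> tuple[str, float]:
    """Stage 1: classify which medical domain the description belongs to."""
    result = co.classify(model="domain-router-model-id", inputs=[user_text])
    top = result.classifications[0]
    return top.prediction, top.confidence

def predict_symptoms(user_text: str, min_confidence: float = 0.5) -> str:
    """Stage 2: route the text to the predicted domain's specialised classifier."""
    domain, confidence = predict_domain(user_text)
    if confidence < min_confidence or domain not in DOMAIN_MODELS:
        return "unable to route: ask the user for more detail"
    result = co.classify(model=DOMAIN_MODELS[domain], inputs=[user_text])
    return result.classifications[0].prediction  # top symptom/disease label
```

Routing on the router's confidence keeps the user base broad while still sending each query to the specialised model most likely to give a reliable prediction.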

Voyagers
Cohere