Discover AI Applications

Browse applications built on modern AI technologies. Explore PoC and MVP applications created by our community and discover innovative use cases.

Prompt Consultant

In the rapidly progressing world of large language models (LLMs) like GPT-4, the art of crafting effective prompts is crucial for harnessing their potential. However, the dynamic nature of LLM research presents a challenge: keeping up with the continuous influx of new prompting techniques. This is where our project, the "Prompt Consultant", steps in. The Prompt Consultant aims to guide users in generating more effective prompts, not by having them chase the latest research, but by leveraging the power of the LLMs themselves. We exploit the LLM's capacity to assimilate the best prompting resources and provide insights for improving prompts.

The challenge is that LLMs, due to their static training data, are not aware of the latest prompting tricks. Our solution is to use in-context learning, incorporating the most recent prompting resources directly into the prompt. Anthropic's long-context API is a critical component of this endeavor. It's impractical to train a new model every time a new prompting method emerges, and the versatility of user queries makes common vector search methods insufficient. The long-context API allows us to include extensive relevant prompting information directly in the context.

Our proof-of-concept demo uses the latest resources from learnprompting.org, embedding them in the model's context. Users can then consult our bot, implemented on Anthropic's Claude-v1.3-100k model, to enhance their prompts. Our preliminary results show promise, indicating LLMs' potential to stay in step with rapid advancements in their field.

In essence, the Prompt Consultant bridges the gap between the rapid progression of LLM research and practical, effective usage of these models. By leveraging the LLMs themselves, we aim to make these technologies more accessible, democratizing the benefits of AI research. Our project foresees a future where anyone, regardless of their expertise, can generate high-quality outputs from these models through optimized prompting.
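A minimal sketch of the in-context approach described above, assuming the legacy Anthropic completions SDK that served claude-v1.3-100k; the resource file and helper function are illustrative assumptions, not the project's actual code.

```python
# Sketch: place recent prompting guides into Claude's 100k-token context and
# ask it to improve a user's prompt. Assumes the legacy anthropic Python SDK
# (pre-Messages API) used with claude-v1.3-100k.
import os
import anthropic

client = anthropic.Client(os.environ["ANTHROPIC_API_KEY"])

# Hypothetical local copy of prompting resources (e.g. gathered from learnprompting.org).
with open("prompting_guides.txt") as f:
    guides = f.read()

def consult(user_prompt: str) -> str:
    prompt = (
        f"{anthropic.HUMAN_PROMPT} Here are up-to-date prompting resources:\n\n"
        f"{guides}\n\n"
        "Using the techniques above, suggest an improved version of this prompt "
        f"and explain the changes:\n\n{user_prompt}"
        f"{anthropic.AI_PROMPT}"
    )
    response = client.completion(
        prompt=prompt,
        model="claude-v1.3-100k",
        max_tokens_to_sample=1000,
        stop_sequences=[anthropic.HUMAN_PROMPT],
    )
    return response["completion"]

print(consult("Write a poem about the ocean."))
```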

long long int
OpenAI GPT-4, Anthropic Claude

Maverick AI

Maverick REACT offers artificial intelligence integration for emergency situations. Our service uses AI together with the necessary event information provided by government officials and acts as an assistant that delivers key protocols and information to citizens. The AI service is accessed via SMS or a web portal, offering a solution that works without internet access.

How does our service work? When an emergency such as a flood, fire or earthquake occurs, our service sends an SMS message or makes a voice call to numbers registered in a database, or the citizen can contact a number provided by the authorities. The message or call contains information about the type and severity of the emergency, preventive measures that should be taken and resources available in the area. The user can respond to the message or call with specific questions about their personal situation or request additional help. Our service uses AI algorithms to process responses and offer personalized and up-to-date advice.

REACT has several advantages over traditional emergency alert and response systems. Firstly, it does not depend on the internet, which means it can function even when there are power outages or problems with mobile networks. Secondly, the REACT service is interactive and adaptable to the individual needs of each user. Thirdly, it uses reliable and verified sources of information provided by the government or other authorized organizations. Finally, REACT is fast and efficient at sending and receiving messages or calls at scale.

Our goal is to contribute to creating a safer and more resilient world in the face of emergency situations through the innovative and intelligent use of technology. We believe that our service can save lives and reduce the suffering caused by disasters. If you want to know more about our service or how to register for it, contact us. We are Maverick AI.
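A rough sketch of the interactive SMS loop described above, assuming a Twilio-style inbound-SMS webhook and Cohere's Generate endpoint (Cohere appears in the project's tags); the route, event store and prompt wording are illustrative assumptions.

```python
# Sketch: an inbound-SMS webhook that answers a citizen's question using the
# official event information as context. The SMS gateway (Twilio-style form
# fields "From"/"Body") and the in-memory event record are assumptions.
import os
import cohere
from flask import Flask, request

app = Flask(__name__)
co = cohere.Client(os.environ["COHERE_API_KEY"])

# Event information entered by government officials (illustrative example).
ACTIVE_EVENT = (
    "Type: flood. Severity: high. Affected area: riverside district. "
    "Shelters: Central School, Sports Hall. Emergency line: 112."
)

@app.route("/sms", methods=["POST"])
def inbound_sms():
    question = request.form.get("Body", "")
    prompt = (
        "You are an emergency assistant. Using only the official information "
        "below, answer the citizen's question briefly and clearly.\n\n"
        f"Official information: {ACTIVE_EVENT}\n\n"
        f"Citizen: {question}\nAssistant:"
    )
    response = co.generate(prompt=prompt, max_tokens=120, temperature=0.3)
    # Return plain text; the SMS gateway relays it back to the sender.
    return response.generations[0].text.strip()

if __name__ == "__main__":
    app.run(port=5000)
```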

MaverickAI
Cohere, Qdrant

scripttwolife

Scrip2life is a new tool that utilises Co:here's large language model to save time by breaking down movie scripts and generating summaries, character traits, and backstories for actors to use as inspiration while preparing for their upcoming auditions. Our goal is to improve their odds of finding and landing suitable and inspiring roles to portray stories for the viewers. We achieve this by streamlining script comprehension with AI tools, so that actors can focus on standing out with the depth of understanding and world-building they manage to portray within the limited preparation time before an audition or role.

Our solution is validated by DeepMind's discussions with industry professionals who evaluated their co-writing system, Dramatron. The writers expressed that they would rather use the system for "world building," for exploring alternative stories by changing characters or plot elements, and for creative idea generation than to write a full play. We built Scrip2life based on this market validation, with the additional advantage that it is accessible to everyone, not just theatre professionals. Our MVP targets budding and working actors who are looking for ways to save time while applying to hundreds of casting calls throughout the year. We additionally provide inspiration that helps them immerse themselves in their characters and scripts for upcoming auditions and roles.

Scrip2life was written collaboratively in the Replit IDE. The frontend is written in HTML, CSS and JavaScript. We used Flask to bring the code to life, enabling the calls to the Cohere API. We prioritised the Co:here Generate API due to the creative nature of Scrip2life.
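A minimal sketch of the script breakdown described above, assuming Cohere's Generate endpoint; the prompt wording, file name and generation parameters are illustrative assumptions rather than Scrip2life's actual prompts.

```python
# Sketch: break a script excerpt into a summary, character traits and a
# backstory with Cohere Generate. Prompts and parameters are illustrative.
import os
import cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])

def generate(instruction: str, script: str) -> str:
    prompt = f"{instruction}\n\nScript:\n{script}\n\nAnswer:"
    response = co.generate(prompt=prompt, max_tokens=300, temperature=0.8)
    return response.generations[0].text.strip()

script = open("script_excerpt.txt").read()

summary = generate("Summarise the following movie script excerpt.", script)
traits = generate("List the main characters and their key personality traits.", script)
backstory = generate(
    "Write a short backstory for the lead character that an actor could use "
    "as inspiration when preparing an audition.",
    script,
)

print(summary, traits, backstory, sep="\n\n")
```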

we-r-artiste

Project Eval

Eval aims to address the problem of subjectively evaluated test answers. Traditionally, this task has been carried out manually by human graders, which can be time-consuming and prone to bias. To address this issue, the project uses Cohere-powered APIs to automate the evaluation process. The Cohere APIs allow for the integration of advanced natural language processing techniques, enabling the system to accurately understand and analyze the content of test answers. The custom model built upon these APIs then scores the answers based on suitable metrics, which can be tailored to the specific requirements of the test or assessment.

One potential application of this technology is in the field of education, where it could be used to grade assignments or exams in a more efficient and unbiased manner. It could also be utilized in professional settings for evaluating job applications or performance reviews. In addition to increasing efficiency and reducing bias, automated evaluation has the potential to provide more consistent and reliable scoring. This helps ensure that test-takers receive fair and accurate assessments of their knowledge and skills.

The model was evaluated based on 4 major metrics:

- Semantic Search: the primary scoring strategy of Eval. It is used to semantically understand the given answer and evaluate it based on content rather than simple textual similarity. Cohere Embed was used to generate embeddings for 5 suggested answers to the question and for the answer to be checked. We then find the distance between the answer and its nearest neighbor among the 5 suggestions, and this distance is used to grade the answer.
- Duplication Check: partially correct answers with duplicated text tended to get higher similarity scores than those without duplication. To stop students from using this exploit to gain extra marks, a duplication checker was implemented based on the Jaccard similarity between sentences within the answer.
- Grammar Check: this strategy checks the grammar of the answer and assigns a score based on the number of grammatical errors. We used the Cohere Generate endpoint to produce a grammatically corrected version of the answer, then check the cosine similarity of the generated version against the original to determine whether the original was grammatically correct.
- Toxicity Check: this detects toxic content in the answer and penalizes the answer if it is toxic. We trained a custom classification model on Cohere using the Social Media Toxicity Dataset by SurgeAI, which gave 98% precision on the test split.

We also implemented Custom Checks, which allows users to assign different weights to the metrics based on how important they are for the evaluation of the answer, allowing for a more personalized evaluation. We built our custom model into a Flask-based REST API server deployed on Replit to streamline usage and allow people to access the full functionality of the model. We also built a highly interactive UI that allows users to easily interact with the API, evaluate their answers and submit questions.
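A compact sketch of the semantic-search and duplication-check strategies described above, assuming Cohere's Embed endpoint; the thresholds, default weights and helper functions are illustrative assumptions, not Eval's exact implementation.

```python
# Sketch: semantic scoring via Cohere Embed (nearest-neighbor distance to 5
# suggested answers) plus a Jaccard-based duplication penalty, combined with
# user-tunable weights. Thresholds and weights are illustrative assumptions.
import os
import re
import numpy as np
import cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])

def cosine_distance(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_score(answer: str, suggested_answers: list[str]) -> float:
    """Score by distance to the nearest suggested answer (0..1, higher is better)."""
    embeddings = co.embed(texts=suggested_answers + [answer]).embeddings
    suggestion_vecs, answer_vec = embeddings[:-1], embeddings[-1]
    nearest = min(cosine_distance(answer_vec, v) for v in suggestion_vecs)
    return max(0.0, 1.0 - nearest)

def duplication_penalty(answer: str, threshold: float = 0.6) -> float:
    """Penalize near-duplicate sentence pairs using Jaccard similarity of word sets."""
    sentences = [s.strip() for s in re.split(r"[.!?]", answer) if s.strip()]
    duplicates = 0
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            a = set(sentences[i].lower().split())
            b = set(sentences[j].lower().split())
            if a and b and len(a & b) / len(a | b) > threshold:
                duplicates += 1
    return min(1.0, 0.2 * duplicates)

def evaluate(answer: str, suggested_answers: list[str], weights=(0.8, 0.2)) -> float:
    """Weighted combination of the metrics (Custom Checks let users tune the weights)."""
    return weights[0] * semantic_score(answer, suggested_answers) - weights[1] * duplication_penalty(answer)
```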

chAI
Cohere

Phoenix Whisper

According to research by J. Birulés-Muntané and S. Soto-Faraco (10.1371/journal.pone.0158409), watching movies with subtitles can help us learn a new language more effectively. However, the traditional way of showing subtitles on YouTube or Netflix does not provide a good way to check the meaning of new vocabulary or to understand complex slang and abbreviations. Therefore, we found that displaying dual subtitles (the original subtitle of the video and the translated one) immediately improves the learning curve. In research conducted in Japan, the authors concluded that participants who viewed an episode with dual subtitles did significantly better (http://callej.org/journal/22-3/Dizon-Thanyawatpokin2021.pdf).

After understanding both the problem and the solution, we decided to create a platform for learning new languages with dual active transcripts. When you enter a YouTube URL or upload an MP4 file in our web application, the app produces a web page where you can view the video with a transcript running next to it in two different languages. We have accomplished this goal and successfully integrated OpenAI Whisper, GPT and Facebook's language model in the backend of the app. At first we used Streamlit for the app, but it does not provide a transcript that automatically moves with the audio timeline, and it does not give us the ability to design the user interface, so we created our own full-stack application using Bootstrap, Flask, HTML, CSS and JavaScript.

Our business model is subscription-based and/or one-time purchase, depending on usage. Our app isn't just for language learners. It can also be used by writers, singers, YouTubers, or anyone who would like to make their content reach more people by adding different languages to their videos and audio. Due to the limitations of the free hosting plan, we could not deploy the app to the cloud for now, but we have a simple website where you can take a quick look at what we are creating (https://phoenixwhisper.onrender.com/success/BzKtI9OfEpk/en).
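A rough sketch of the transcription-plus-translation backend described above, assuming the open-source openai-whisper package and Facebook's M2M100 model via Hugging Face transformers (one plausible reading of "Facebook's language model"); the file name and target language are illustrative assumptions.

```python
# Sketch: transcribe a video with Whisper, then translate each timed segment
# to build a dual (original + translated) transcript. Uses openai-whisper and
# facebook/m2m100_418M; the input file and target language "vi" are assumptions.
import whisper
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

whisper_model = whisper.load_model("base")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
translator = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

def translate(text: str, src: str = "en", tgt: str = "vi") -> str:
    tokenizer.src_lang = src
    encoded = tokenizer(text, return_tensors="pt")
    generated = translator.generate(
        **encoded, forced_bos_token_id=tokenizer.get_lang_id(tgt)
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

result = whisper_model.transcribe("video.mp4")
dual_transcript = [
    {
        "start": seg["start"],          # seconds; used to sync with the player
        "end": seg["end"],
        "original": seg["text"].strip(),
        "translated": translate(seg["text"].strip()),
    }
    for seg in result["segments"]
]

for line in dual_transcript[:5]:
    print(f'[{line["start"]:.1f}s] {line["original"]} | {line["translated"]}')
```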

Phoenix
OpenAI GPT-3, OpenAI Codex, OpenAI Whisper