Moldova, Republic of
1 year of experience
Hello there, I am Maxim, 18 years old, with extensive Figma and hackathon experience!
Cleo is a versatile smart assistant that can receive both text and voice questions from website users. Using natural language processing (NLP) and machine learning (ML), Cleo understands each request and provides a personalized response, regardless of whether the question arrives as text or voice.

Beyond instant user support, Cleo gives marketers valuable insight into inquiries and sales analytics: it can help identify emerging trends, shift sales towards popular products, and inform new products and business strategies based on users' queries. By automating common inquiries with personalized responses, Cleo also saves businesses time and money, reducing the workload on customer service and support teams, improving efficiency and productivity, and enhancing customer satisfaction.

Businesses that may benefit from Cleo include e-commerce companies, online retailers, travel agencies, healthcare providers, financial institutions, and educational institutions: organizations that receive a large volume of inquiries and need efficient customer service and support processes. Teams that rely on data analytics to drive decisions, such as marketing and sales, benefit as well, since Cleo's insights into user queries and sales analytics can inform strategy and drive business growth.

Overall, Cleo is a powerful tool that can significantly improve a business's online presence, provide better support to customers, offer valuable insights, and save time and resources. Its ability to accept both voice and text questions makes it an even more adaptable and valuable asset.
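The core flow, with voice and text queries converging on a single answering pipeline, can be sketched as follows. This is a minimal illustration, not Cleo's actual implementation: `transcribe` and `answer` are hypothetical stand-ins for a real speech-to-text service and an NLP/ML model.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for a real speech-to-text service.
def transcribe(audio: bytes) -> str:
    return "what are your opening hours?"  # pretend the audio decodes to this

# Hypothetical stand-in for an NLP/ML model that matches user intent.
def answer(question: str) -> str:
    if "hours" in question.lower():
        return "We are open 9:00-18:00, Monday to Friday."
    return "Let me connect you with a human agent."

@dataclass
class Query:
    text: Optional[str] = None
    audio: Optional[bytes] = None

def handle(query: Query) -> str:
    # Voice and text converge on the same answering pipeline.
    question = query.text if query.text is not None else transcribe(query.audio)
    return answer(question)
```

The key design point is that voice input is normalized to text as early as possible, so a single answering pipeline serves both channels.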
Currently, the most popular corporate knowledge management system is Confluence by Atlassian. It is known for weak search capabilities, which leave most corporate knowledge effectively inaccessible, especially in fast-growing companies where structure and responsibilities change regularly. Some independent vendors fill this gap with carefully tuned Solr-based search engines for Confluence, but these offer keyword matching rather than real semantic search. Confluence is a proprietary cloud-based solution, and it would be difficult to build an MVP of a search extension for it within a hackathon. The most advanced open-source alternative is Wiki.js, which already supports external search engines. So the current goal is to implement an external search engine for Wiki.js using Cohere's LLM-powered Multilingual Text Understanding model and Qdrant's vector search engine.

In the second stage of the project (most likely outside the hackathon scope), we plan to add the ability to upload and index videos in our knowledge management system. Recordings of presentations and meetings are among the richest sources of knowledge, but they have been left out of knowledge management due to technical difficulties. Simple transcription and semantic search over that content could significantly boost corporate knowledge accessibility.
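The indexing-and-search flow can be sketched as below. This is a stand-in sketch, not the production design: `embed` uses a deterministic bag-of-words vector in place of Cohere's multilingual embedding model, and `MiniIndex` is a brute-force substitute for a Qdrant collection.

```python
import math
from collections import Counter
from typing import Dict, List, Tuple

# Stand-in embedding: the real system would call Cohere's multilingual
# embedding model; a normalized bag-of-words vector keeps this sketch
# self-contained and deterministic.
def embed(text: str) -> Dict[str, float]:
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return {word: c / norm for word, c in counts.items()}

def cosine(a: Dict[str, float], b: Dict[str, float]) -> float:
    return sum(weight * b.get(word, 0.0) for word, weight in a.items())

class MiniIndex:
    """Brute-force stand-in for a Qdrant vector collection."""

    def __init__(self) -> None:
        self.pages: List[Tuple[str, Dict[str, float]]] = []

    def upsert(self, page: str) -> None:
        # In production this would store the Cohere embedding in Qdrant.
        self.pages.append((page, embed(page)))

    def search(self, query: str, top_k: int = 3) -> List[Tuple[float, str]]:
        # Rank every indexed page by cosine similarity to the query.
        q = embed(query)
        scored = [(cosine(q, vec), page) for page, vec in self.pages]
        return sorted(scored, reverse=True)[:top_k]
```

With real LLM embeddings, semantically related pages match even when they share no keywords, which is exactly the property the bag-of-words stand-in (and Solr-style keyword search) lacks.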
Enhancing Readability with Whisper and ChatGPT

Whisper is an incredibly powerful transcription model, which we used to convert video content into text. However, the resulting transcript was a dense wall of text that was difficult to digest. To improve readability, we employed ChatGPT to introduce structure, including paragraph breaks and headers. The text is now significantly more reader-friendly.

Integrating Slides and Transcripts for Seamless Presentations

During presentations, speakers often refer to slides, which are absent from the transcript. To address this, we synchronized the text with the video in our wiki. This feature allows users to click on the text and instantly view the corresponding slide. Alternatively, users can play the video without audio and follow along with the highlighted text, creating a more integrated and accessible experience. And everything is backed by the semantic search we introduced at the previous hackathon.
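The two-way synchronization reduces to a timestamp lookup over transcript segments. A minimal sketch, assuming Whisper-style segments represented as (start time in seconds, text) pairs; the function names are illustrative, not taken from our codebase:

```python
import bisect
from typing import List, Tuple

Segment = Tuple[float, str]  # (start time in seconds, segment text)

def segment_at(segments: List[Segment], t: float) -> str:
    """Return the transcript text being spoken at time t (for highlighting)."""
    starts = [start for start, _ in segments]
    # Rightmost segment whose start time is <= t.
    i = bisect.bisect_right(starts, t) - 1
    return segments[max(i, 0)][1]

def seek_time(segments: List[Segment], index: int) -> float:
    """Return the video timestamp to seek to when a user clicks segment `index`."""
    return segments[index][0]
```

`seek_time` drives click-to-slide navigation, while `segment_at` drives the follow-along highlighting as the video plays.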