Currently, the most popular corporate knowledge management system is Confluence by Atlassian. It is notorious for its weak search, which leaves much corporate knowledge effectively inaccessible, especially in fast-growing companies where structure and responsibilities change regularly. Some independent vendors fill this gap with carefully tuned Solr-based search engines for Confluence, but that is still keyword search, not real semantic search. Confluence is a proprietary cloud-based solution, so building an MVP of a search extension for it within a hackathon would be difficult. The most advanced open-source alternative is Wiki.js, which already supports external search engines. Our current goal is therefore to implement an external search engine for Wiki.js using Cohere's LLM-powered Multilingual Text Understanding model and Qdrant's vector search engine, as sketched below.

In the second stage of the project (most likely outside the hackathon scope), we plan to add the ability to upload and index videos in our knowledge management system. Recordings of presentations and meetings are the richest source of knowledge, but they have been left out of knowledge management due to technical difficulties. Simple transcription and semantic search over that content could significantly boost corporate knowledge accessibility.
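To make the first stage concrete, here is a minimal sketch of the indexing and search flow. It assumes the Python clients for Cohere and Qdrant (`cohere`, `qdrant-client`); the collection name, page fields, and API key are placeholders, and the real Wiki.js integration would live in a Node.js search-engine module rather than in Python.

```python
import cohere
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

co = cohere.Client("YOUR_COHERE_API_KEY")        # placeholder key
qdrant = QdrantClient(url="http://localhost:6333")

COLLECTION = "wiki_pages"                        # assumed collection name

# Cohere's embed-multilingual-v2.0 model produces 768-dimensional vectors.
qdrant.recreate_collection(
    collection_name=COLLECTION,
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),
)

def index_pages(pages):
    """Embed wiki page texts with Cohere and store them in Qdrant."""
    texts = [p["text"] for p in pages]
    vectors = co.embed(texts=texts, model="embed-multilingual-v2.0").embeddings
    qdrant.upsert(
        collection_name=COLLECTION,
        points=[
            PointStruct(id=p["id"], vector=v, payload={"title": p["title"]})
            for p, v in zip(pages, vectors)
        ],
    )

def search(query, limit=5):
    """Embed the query the same way and return the most similar pages."""
    qv = co.embed(texts=[query], model="embed-multilingual-v2.0").embeddings[0]
    return qdrant.search(collection_name=COLLECTION, query_vector=qv, limit=limit)
```

Using the same multilingual embedding model for pages and queries is what allows a query in one language to match a page written in another, something a Solr-based keyword index cannot do.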
Enhancing Readability with Whisper and ChatGPT

Whisper is an incredibly powerful transcription model, which we used to convert video content into text. However, the resulting transcript was a dense wall of text, making it difficult to digest. To improve readability, we employed ChatGPT to introduce structure, including paragraph breaks and headers; the text is now significantly more reader-friendly (see the first sketch below).

Integrating Slides and Transcripts for Seamless Presentations

During presentations, speakers often refer to slides, which are absent from the transcript. To address this, we synchronized the text with the video in our wiki. Users can click on the text and instantly view the corresponding slide, or play the video without audio and follow along with the highlighted text, creating a more integrated and accessible experience (see the second sketch below). And everything is backed by the semantic search we introduced at the previous hackathon.
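As a first sketch, here is roughly how the transcription and restructuring steps fit together. This is an illustration, not our exact code: the input file name, model size, and prompt are placeholders, and it uses the pre-1.0 `openai` Python SDK's ChatCompletion interface with gpt-3.5-turbo as an assumed model choice.

```python
import whisper
import openai

# Transcribe the recording; larger Whisper models trade speed for accuracy.
model = whisper.load_model("medium")
result = model.transcribe("talk.mp4")            # placeholder input file
raw_text = result["text"]

# Ask ChatGPT to add structure without rewording the transcript.
openai.api_key = "YOUR_OPENAI_API_KEY"           # placeholder key
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Split this raw transcript into paragraphs and add short "
                    "section headers. Do not change the wording."},
        {"role": "user", "content": raw_text},
    ],
)
readable_text = response["choices"][0]["message"]["content"]
```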
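The second sketch shows the idea behind the slide synchronization: Whisper returns per-segment timestamps alongside the text, so every piece of transcript knows when it was spoken. A hypothetical rendering step, continuing from the `result` above, could turn each segment into a clickable span that seeks the embedded video (and thus the slide on screen at that moment):

```python
# Whisper's transcribe() result includes "segments" with start/end times.
segments = result["segments"]  # continuing from the transcription sketch above

# Hypothetical rendering: each segment becomes a span that, when clicked,
# seeks the page's <video> element to the moment the text was spoken.
html = "\n".join(
    f'<span class="ts" onclick="document.querySelector(\'video\').currentTime='
    f'{seg["start"]}">{seg["text"].strip()}</span>'
    for seg in segments
)
```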