Convogpt is an exciting new platform that combines cutting-edge technology with the latest developments in pedagogical theory. Built around the powerful GPT-3 and DALL-E 2 engines, it offers a dynamic and engaging learning experience tailored to the needs of each individual user.

At the heart of Convogpt is a set of personalized scenarios that let users practice a range of essential interpersonal skills. Whether you want to improve your negotiation techniques, refine your project management abilities, or simply become a more effective team player, Convogpt has something to offer. Through its innovative use of gamification, Convogpt makes learning both fun and accessible: by challenging you to complete increasingly complex tasks and rewarding your progress, it keeps you engaged and motivated throughout your learning journey.

So if you are looking for a new and exciting way to improve your interpersonal skills and advance your career, look no further than Convogpt. With its cutting-edge technology, expertly designed scenarios, and engaging gamification features, it is the perfect platform for anyone looking to take their skills to the next level.
CoPilot-J is an open-source alternative to GitHub Copilot X for VS Code. It can chat about code, and it generates, explains, and refactors code using LocalAI, so it works with any open-source model in ggml format.

The implementation has two main components. The first is a VS Code plugin that handles all the interaction: it can chat with models, send code into the conversation, and detect code snippets in the conversation. The second is go-skynet/LocalAI, which does all the magic of serving AI models. In this demo we run WizardLM for inference, but any ggml model will work.

The repo is up here: https://github.com/badgooooor/localai-vscode-plugin.git

PS. Initially we thought we'd use GPT4All-J, hence the name CoPilot-J 😂
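Because LocalAI serves models behind an OpenAI-compatible REST API, the plugin's chat calls can be sketched roughly as below. This is a minimal illustration, not the plugin's actual code: the port, the model name (`wizardlm`), and the prompt are all assumptions for the example.

```python
import json
import urllib.request

# LocalAI's default OpenAI-compatible endpoint (port assumed for this sketch).
LOCALAI_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload for LocalAI."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def chat(model: str, prompt: str) -> str:
    """POST the payload to a running LocalAI instance and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        LOCALAI_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    return data["choices"][0]["message"]["content"]

# With LocalAI running and a ggml model loaded (e.g. a WizardLM build), the
# plugin's "explain this code" action boils down to something like:
# chat("wizardlm", "Explain this function: def add(a, b): return a + b")
```

Because the API surface is OpenAI-compatible, swapping WizardLM for any other ggml model is just a matter of changing the `model` field.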
Manual call-center communication is time-consuming, repetitive, and costly. By implementing an AI-driven healthcare call center like HeyDoctor!, we can improve the patient experience, reallocate staff resources, and streamline financial resources.

For the submission, we split the project into two main parts: the input side and the output side. On the input side, we used the OpenAI Whisper API to convert speech to text; the resulting transcript is sent to our backend service to create a response. On the output side, we used the OpenAI GPT-3.5-turbo API as the reasoning engine powering the assistant: the user's dialog obtained from the Whisper API is fed to GPT-3.5-turbo to generate a reply, which is then passed to the ElevenLabs API to produce a realistic voice.

For the frontend we used Svelte, and for the backend FastAPI; both services are deployed on Vercel.
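The middle stage of the pipeline above (transcript in, assistant reply out) can be sketched roughly as follows. This is an illustrative sketch, not our production backend: the system prompt is an assumption, and the Whisper transcription step (stage 1) and the ElevenLabs text-to-speech step (stage 3) are only outlined in the docstring.

```python
import json
import urllib.request

OPENAI_CHAT_URL = "https://api.openai.com/v1/chat/completions"

# Illustrative system prompt, not the project's actual one.
SYSTEM_PROMPT = "You are HeyDoctor!, a helpful healthcare call-center assistant."

def build_reasoning_request(transcript: str) -> dict:
    """Stage 2: wrap the Whisper transcript in a gpt-3.5-turbo chat request."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
    }

def generate_reply(transcript: str, api_key: str) -> str:
    """POST the request to the OpenAI API and return the assistant's reply.

    Stage 1 (caller audio -> transcript, via the Whisper endpoint) runs
    before this, and stage 3 (reply text -> audio, via the ElevenLabs
    text-to-speech endpoint) runs after; only the reasoning step in the
    middle is shown here.
    """
    req = urllib.request.Request(
        OPENAI_CHAT_URL,
        data=json.dumps(build_reasoning_request(transcript)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    return data["choices"][0]["message"]["content"]
```

In the real app this logic sits behind a FastAPI route, so the Svelte frontend only ever talks to our backend, never to the OpenAI or ElevenLabs APIs directly.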