GPT-3

GPT-3 stands for Generative Pre-trained Transformer 3. It is an autoregressive language model that uses deep learning to produce human-like text, and the third-generation language prediction model in the GPT-n series created by OpenAI. With 175 billion parameters, it was the largest language model ever built at the time of its release. GPT-3 is currently in open beta.
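"Autoregressive" means each new token is predicted from the tokens generated so far. The toy sketch below makes that loop visible; a tiny bigram frequency table stands in for GPT-3's 175-billion-parameter transformer, which is purely an illustrative assumption, not how GPT-3 itself is implemented.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count token -> next-token frequencies from a whitespace-split corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, max_tokens=5):
    """Greedy autoregressive decoding: repeatedly append the most
    frequent successor of the last token generated so far."""
    out = [start]
    for _ in range(max_tokens):
        successors = counts.get(out[-1])
        if not successors:
            break
        out.append(max(successors, key=successors.get))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat the cat ran")
print(generate(model, "the"))  # each word is conditioned on the one before it
```

GPT-3 follows the same generate-one-token-then-recondition loop, just with a neural network producing the next-token distribution instead of a lookup table.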

GPT-3 General Information

Discover general information about GPT-3 to help you get started

GPT-3 Boilerplates

Boilerplates to help you get started

GPT-3 Tutorials

Explore the coding tutorials and how-to guides available on our website to help you get started and learn to build with GPT-3 artificial intelligence technology

GPT-3 Hackathon projects

Solutions built with GPT-3 that have been created during our hackathons by the members of our community

MindMate

During the hackathon, we fine-tuned GPT-3 and built a self-analysis tool that helps one objectively assess their problem and develop new ideas for solving it. It can be used by people who can't access mental health care because of high prices and stigma. It is based on CBT and should be highly effective in the following cases:

1. A person has a problem and doesn't know how to solve it. For example, "I can't keep up with deadlines," or "My parents are overprotective."
2. A person can't make a decision: "Should I move?", "Should I accept an offer from a new company?", etc.
3. A person can't sort out their thoughts: "I can't understand why I'm so uncomfortable being a dad," "Why have I become so irritable?", etc.
4. A person wants to improve their relationship: "I'm so jealous," "We fight all the time," "I'm not happy with my wife. I cheated, and I feel guilty."

In therapy, people who are objective about their situation and able to set specific goals tend to achieve better results. This tool does exactly that. A typical session consists of three parts:

1. Analysis. This part includes questions that make the person analyze various aspects of the situation and draw an objective picture. The essence of this part is the transition from an emotional to a rational perception of reality.
2. Empathy. It consists of a comprehensive generalizing statement aimed at supporting the client emotionally.
3. Decision. It consists of questions that allow the person to analyze the availability of resources and ways to solve the problem. The questions move the person from emotions to concrete steps toward the goal.
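
The three-part session flow could be sketched as a simple prompt pipeline. The stage instructions and prompt wording below are illustrative assumptions, not the team's actual fine-tuning data or prompts:

```python
# Illustrative sketch of the Analysis -> Empathy -> Decision session flow.
# Stage descriptions are paraphrased from the project write-up; the prompt
# template itself is a hypothetical example.
SESSION_STAGES = [
    ("Analysis", "Ask questions that move the client from an emotional "
                 "to a rational view of the situation."),
    ("Empathy", "Offer one comprehensive, generalizing statement that "
                "supports the client emotionally."),
    ("Decision", "Ask questions about available resources and concrete "
                 "steps toward the goal."),
]

def build_prompt(stage_name, client_statement):
    """Assemble a stage-specific instruction plus the client's words,
    ready to send to a text-completion endpoint."""
    instructions = dict(SESSION_STAGES)
    return (f"Stage: {stage_name}\n"
            f"Instruction: {instructions[stage_name]}\n"
            f"Client: {client_statement}\n"
            f"Assistant:")

def run_session(client_statement):
    """A full session walks the three stages in order."""
    return [build_prompt(name, client_statement) for name, _ in SESSION_STAGES]
```

Each prompt would then be sent to the fine-tuned model in turn, so the session always progresses from analysis through empathy to a decision.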

mEYE Buddy App

The mEYE Buddy app plays the part of your very own personal assistant, making the world a little more accessible for anyone who is visually impaired and requires assistance. The app is highly affordable, with a premium version based on a loyalty program (business model). With mEYE Buddy, not only would the lives of blind people be much easier, but they would gain independence: they would no longer need to rely on a caretaker or service dog to help them with everyday chores. It is a state-of-the-art AI tool for assisting visually impaired people in everyday life.

How the app works: on first login, the user registers using their fingerprint. Every time the app is opened, a voice tells the user where everything is located on the screen. The design is simple and the buttons are big for easy access; the user can always press the icon in the middle of the screen for a reminder of the button locations. The voice assistant is the main function of the app. It activates the AI, which uses a connected camera to describe the surroundings and warn the user of hazards. The key places tab takes the user to a guide to registered locations that are important to them (workplace, supermarket, hospital, home, coffee shop, etc.). They can also register new locations and remove old ones, and hazards noted by the AI can be registered here as well. The devices tab is simple: it connects the phone to an external camera, preferably one on the user's glasses or attached to their body. The user can also connect a smart watch for easy access. The settings tab is pretty self-explanatory, but besides the settings it also contains emergency information about the user, in case they need help from someone.
The mEYE Buddy app connects to a camera on the glasses or on the body and describes everything important going on in front of the user: every peculiar movement that triggers its sensors, along with warnings about any potential hazards in the way. The app can also be asked to describe something specific in more detail, and it stores any hazard or newly recognized item in its database. The app can also be told where certain points of interest are located (workplace, supermarket, hospital, home, coffee shop, etc.) for easier access later. It comes with a simple UI with big buttons for input, can be instructed through the AI voice as well, and includes a 3x3 keyboard for simpler accessibility.
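
The key-places and hazard bookkeeping described above could be modeled as a small registry. The class and method names below are illustrative assumptions, not the app's actual code:

```python
# Hypothetical sketch of the "key places" tab: register/remove important
# locations and record hazards noted by the vision AI.
class KeyPlaces:
    def __init__(self):
        self.places = {}   # name -> (latitude, longitude)
        self.hazards = []  # free-text hazard notes from the camera AI

    def register(self, name, lat, lon):
        """Add or update a location important to the user."""
        self.places[name] = (lat, lon)

    def remove(self, name):
        """Remove an old location; silently ignore unknown names."""
        self.places.pop(name, None)

    def note_hazard(self, description):
        """Record a hazard flagged by the AI for later review."""
        self.hazards.append(description)

registry = KeyPlaces()
registry.register("home", 45.81, 15.98)
registry.register("supermarket", 45.80, 15.97)
registry.remove("supermarket")
registry.note_hazard("open manhole on Main Street")
```

A real implementation would persist this data and attach it to the guided-navigation feature, but the register/remove/note operations are the core of the tab as described.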

Phoenix Whisper

According to research by J. Birulés-Muntané and S. Soto-Faraco (doi:10.1371/journal.pone.0158409), watching movies with subtitles can help us learn a new language more effectively. However, the traditional way of showing subtitles on YouTube or Netflix does not give us a good way to check the meaning of new vocabulary or to understand complex slang and abbreviations. We found that displaying dual subtitles (the video's original subtitle and the translated one) immediately improves the learning curve: in research conducted in Japan, the authors concluded that participants who viewed an episode with dual subtitles did significantly better (http://callej.org/journal/22-3/Dizon-Thanyawatpokin2021.pdf).

After understanding both the problem and the solution, we decided to create a platform for learning new languages with dual active transcripts. When you enter a YouTube URL or upload an MP4 file in our web application, the app produces a web page where you can view the video with a transcript running next to it in two different languages. We accomplished this goal and successfully integrated OpenAI Whisper, GPT, and Facebook's language model in the backend of the app. At first we used Streamlit, but it does not provide a transcript that moves automatically with the audio timeline, and it does not give us the ability to design the user interface, so we created our own full-stack application using Bootstrap, Flask, HTML, CSS, and JavaScript. Our business model is subscription-based and/or one-time purchase based on usage. Our app isn't just for language learners: it can also be used by writers, singers, YouTubers, or anyone who would like their content to reach more people by adding different languages to their videos and audio.
Due to the limitations of our free hosting plan, we could not deploy the app on the cloud for now, but we have a simple website where you can take a quick look at what we are creating (https://phoenixwhisper.onrender.com/success/BzKtI9OfEpk/en).
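
The core of the dual-active-transcript idea can be sketched as pairing time-stamped original segments with their translations, then looking up whichever pair covers the current playback time. The segment shape below mirrors the `start`/`end`/`text` fields Whisper produces per segment; the one-translation-per-segment pairing and function names are illustrative assumptions, not Phoenix Whisper's actual backend:

```python
def dual_subtitles(original_segments, translated_texts):
    """Pair each (start, end, text) segment with its translated text,
    assuming one translated string per original segment."""
    return [
        {"start": seg["start"], "end": seg["end"],
         "original": seg["text"], "translated": tr}
        for seg, tr in zip(original_segments, translated_texts)
    ]

def active_segment(subtitles, t):
    """Return the subtitle pair covering playback time t (seconds),
    or None if no segment is active -- this is what lets both transcript
    lines scroll together with the audio timeline."""
    for sub in subtitles:
        if sub["start"] <= t < sub["end"]:
            return sub
    return None

segments = [
    {"start": 0.0, "end": 2.5, "text": "Hello"},
    {"start": 2.5, "end": 4.0, "text": "world"},
]
subs = dual_subtitles(segments, ["Bonjour", "monde"])
```

The frontend would poll the video's current time and highlight `active_segment(subs, t)` in both language columns.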

Join one of our AI hackathons to build modern artificial intelligence together with talented members of our lablab community