We first tried to push the dataset to the API and train the model through a Python script, but couldn't find any documentation on that, so we trained on the web platform and also provided the dataset as prompts.
AtYou is built mainly to extract the transcript from a YouTube video and summarize it. The summary lets people get the main points of a video in a short span of time, so they don't have to spend time watching the whole thing. It saves time, surfaces the key points, and can also help with SEO by using those points to improve recommendations. The app should also help people with hearing loss and non-native English speakers.
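A minimal sketch of the transcript-to-summary flow, assuming the youtube-transcript-api package for fetching captions and the Cohere Python SDK for the summarization call. The video ID, prompt wording, and choice of summarizer are illustrative assumptions, not necessarily what AtYou actually uses, and the transcript API details can vary by package version:

```python
import cohere
from youtube_transcript_api import YouTubeTranscriptApi

def fetch_transcript(video_id: str) -> str:
    # Each caption segment has 'text', 'start', and 'duration'; we only keep the text.
    segments = YouTubeTranscriptApi.get_transcript(video_id)
    return " ".join(seg["text"] for seg in segments)

def summarize(transcript: str, api_key: str) -> str:
    co = cohere.Client(api_key)
    # Ask the model for a short bullet-point summary of the transcript.
    response = co.chat(
        message="Summarize this video transcript in a few bullet points:\n\n"
                + transcript[:8000]  # crude cut to keep the prompt within context limits
    )
    return response.text

if __name__ == "__main__":
    text = fetch_transcript("dQw4w9WgXcQ")  # placeholder video ID
    print(summarize(text, api_key="YOUR_COHERE_API_KEY"))
```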
We made an app to search for movies. We used Cohere and Pinecone: Pinecone as the vector database and Cohere for the embeddings. The app returns movies based on the user's query, so if a user searches "Alien invasion movie", the app outputs "Edge of Tomorrow", etc. It's essentially a Google-like search, but for movies, and it uses an NLP embedding model (cohere-large).
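A minimal sketch of the query path, assuming a Pinecone index named "movies" that has already been populated with Cohere embeddings of movie descriptions. The index name, metadata field, and embedding model name are illustrative assumptions (a current Cohere embed model is used here rather than the older "large" model mentioned above):

```python
import cohere
from pinecone import Pinecone

co = cohere.Client("COHERE_API_KEY")
pc = Pinecone(api_key="PINECONE_API_KEY")
index = pc.Index("movies")  # assumed index of movie-description vectors

def search_movies(query: str, top_k: int = 5) -> list[str]:
    # Embed the free-text query with the same model used to index the movies.
    query_vector = co.embed(
        texts=[query],
        model="embed-english-v3.0",
        input_type="search_query",
    ).embeddings[0]
    # Ask Pinecone for the nearest movie vectors and return their titles.
    results = index.query(vector=query_vector, top_k=top_k, include_metadata=True)
    return [match.metadata["title"] for match in results.matches]

print(search_movies("Alien invasion movie"))
# e.g. ['Edge of Tomorrow', ...] depending on what was indexed
```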