While coding, programmers rely heavily on documentation, and switching windows for every search can be disruptive, especially on a single-display setup. To speed up development, we built a code-helper VSCode extension. Llama2 GPT CodePilot aims to help software developers write and debug code by prompting the model directly inside the editor, making coding more convenient for developers with only one display. It uses a large language model, CodeLlama-7B-Instruct-GPTQ: it takes input from the user and generates a relevant response based on the given text. It is published on the VSCode extension marketplace under the name "Llama2-GPT-CodePilot" and is written in TypeScript.
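A minimal sketch of how the extension could wrap the user's input in the CodeLlama-Instruct prompt format before sending it to the model. The function name and system-prompt text are illustrative assumptions, not the published extension code:

```typescript
// Hypothetical helper: wraps user input in the [INST]/<<SYS>> prompt
// format that CodeLlama-Instruct models expect.
const SYSTEM_PROMPT =
  "You are a coding assistant. Answer with concise, correct code.";

function formatInstructPrompt(
  userInput: string,
  system: string = SYSTEM_PROMPT
): string {
  // CodeLlama-Instruct uses [INST] ... [/INST] with an optional <<SYS>> block.
  return `[INST] <<SYS>>\n${system}\n<</SYS>>\n\n${userInput.trim()} [/INST]`;
}

// The extension would send a string like this to the inference endpoint.
const prompt = formatInstructPrompt(
  "Write a TypeScript function that reverses a string."
);
console.log(prompt);
```

The string returned here is what the backend would pass to the GPTQ-quantized model for generation.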
A web app built with React and Next.js, using GCP for hosting and Firebase for storage. Its main feature is a large language model: Google Vertex AI PaLM2 (text-bison model). For fine-tuning we plan to use the CUAD dataset, and the model was extended with LangChain to bring its summarization abilities to the state of the art. The app summarizes the terms and conditions of the companies the user has signed up for, keeping them aware of what they agreed to. The model is evaluated with TruLens. The final features: summarization of a company's ToS; storing previous answers and using them to plot and visualize the terms you've agreed to; submitting your own terms and conditions, which are summarized and sent into the pipeline for context.
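Before a user-submitted ToS document can enter the summarization pipeline, it has to be split into model-sized pieces. A minimal sketch of such a chunking step, with illustrative chunk-size and overlap defaults that are assumptions rather than the app's actual configuration:

```typescript
// Hypothetical chunker: splits a long ToS document into overlapping
// windows so each piece fits in the model's context while preserving
// continuity across chunk boundaries.
function chunkText(text: string, chunkSize = 500, overlap = 100): string[] {
  const chunks: string[] = [];
  const step = chunkSize - overlap; // advance less than chunkSize to overlap
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

Each chunk would then be summarized (or embedded) independently before the results are combined downstream.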
People rarely read the full terms of service when they sign up for a company, and we came up with a solution. The primary objective of Term Aware Guard is to simplify the readability of Terms and Conditions, providing a summarized version and ensuring that users are well informed about data privacy beforehand. It is a web app built with Next.js, using GCP for hosting and Firebase for storing the companies the user has signed up for. The backend is a Python REST API hosted on DigitalOcean. For inference we use Google Vertex AI PaLM2 (text-bison@002), a large language model with impressive capabilities. Companies' terms and conditions change constantly, and fine-tuning the model every time that happens would be costly and inefficient, so instead we use retrieval-augmented generation (RAG) to enhance the model's summarization capabilities and to supply context from the latest version of a company's ToS. The model is evaluated with the help of TruLens, which measures its quality through feedback and metrics. We used Apify to gather the data by web scraping, and Pinecone as the vector store that RAG retrieves context from.
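The retrieval step described above can be sketched as follows, with an in-memory store standing in for Pinecone and precomputed embeddings assumed; the vector values, types, and prompt template are illustrative, not the production code:

```typescript
// Hypothetical RAG retrieval: rank stored ToS chunks by cosine
// similarity to the query embedding, then build a grounded prompt.
type Doc = { id: string; text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function retrieveTopK(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}

function buildSummaryPrompt(question: string, context: Doc[]): string {
  const ctx = context.map((d) => d.text).join("\n---\n");
  return `Summarize the following terms-of-service excerpts and answer: ${question}\n\nContext:\n${ctx}`;
}
```

In production, `retrieveTopK` would be replaced by a Pinecone index query, and the resulting prompt sent to the text-bison model; keeping the index fresh via scraping is what lets the system avoid re-fine-tuning when a ToS changes.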