Data Engineer | Python Aficionado | Life-Long Learner | Current goal: grokking Django/WebDev/LLMs. I have prior experience developing LLM-based applications and am familiar with RAG and MRKL architectures. Tech stack for LLM projects: OpenAI/Azure OpenAI Services/Cohere as LLM providers; LangChain (LCEL); Chroma/Weaviate for the vector DB and retrieval; Streamlit for the prototype UI; and Django (supplemented with HTMX, TailwindCSS & AlpineJS) to further flesh out a prototype (user management, persistent state, settings, integration with other relevant information) once the initial idea is validated.
While coding, programmers rely heavily on documentation, and switching windows every time you search for something can be tedious, especially on a device with only one display. Our solution for speeding up development is a code-helper VSCode extension. Llama2 GPT CodePilot aims to help software developers write code or debug their software by prompting a GPT-style model, making coding convenient for developers with only one display. It uses a large language model, CodeLlama-7B-Instruct-GPTQ, takes input from the user, and generates a relevant response based on the given text. It is published on the VSCode extension marketplace under the name "Llama2-GPT-CodePilot" and is written in TypeScript.
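The extension itself is written in TypeScript, but the core request flow can be sketched language-agnostically. The snippet below is a minimal Python illustration of wrapping a user's question (plus any selected editor code) into the `[INST] ... [/INST]` single-turn chat format that CodeLlama-Instruct variants expect; the function name and overall shape are assumptions for illustration, not the extension's actual API.

```python
# Minimal sketch: wrap the user's question and selected editor code
# into the [INST] ... [/INST] instruct format used by CodeLlama-Instruct
# models. Names here are illustrative, not the extension's real API.

def build_codellama_prompt(question: str, selected_code: str = "") -> str:
    """Format a single-turn instruct prompt for CodeLlama-7B-Instruct."""
    body = f"{question}\n\n{selected_code}".strip()
    return f"[INST] {body} [/INST]"

prompt = build_codellama_prompt(
    "Why does this loop never terminate?",
    "while i < 10:\n    print(i)",
)
# `prompt` would then be sent to the CodeLlama-7B-Instruct-GPTQ model,
# and the generated completion rendered inside the VSCode panel.
```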
A web app built with React and Next.js, using GCP for hosting and Firebase for storage. The app's main feature is a large language model: Google Vertex AI PaLM2 (text-bison model). For fine-tuning we plan to use the CUAD dataset, and the model was augmented with LangChain to bring its summarization abilities to the state of the art. The app summarizes the terms and conditions of the companies a user has signed up for and keeps them aware of what they agreed to. The model will be evaluated with the help of TruLens. The final features will be: summarization of a company's ToS; storing previous answers and using them to plot and visualize the terms you've agreed to; and submitting your own terms and conditions, which will be summarized and fed into the pipeline for context.
People rarely read the full terms of service when they sign up with a company, and we came up with a solution. The primary objective of Term Aware Guard is to simplify the readability of Terms and Conditions, providing a summarized version and ensuring that users are well informed about data privacy beforehand. It is a web app built with Next.js, using GCP for hosting and Firebase for storing the companies the user has signed up for. The backend is a Python REST API hosted on Digital Ocean. We use Google Vertex AI PaLM2 (text-bison002 model) for inference, a large language model with impressive capabilities. Companies' terms and conditions change constantly, and fine-tuning the model every time that happens would be costly and inefficient, so we found a solution: we use retrieval-augmented generation (RAG) to enhance the model's summarization capabilities and to ground it in the latest version of each company's ToS. The model is evaluated with the help of TruLens to measure its quality with feedback and metrics. We used Apify for gathering the data by web scraping, and Pinecone as the vector store from which RAG retrieves context.
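The RAG flow above (retrieve relevant ToS passages, then build a grounded summarization prompt) can be sketched in a few lines. This is a minimal illustration only: a toy keyword-overlap retriever stands in for Pinecone's vector similarity search, the final model call is omitted, and every function and variable name is an assumption, not the app's actual code.

```python
# Minimal sketch of the RAG summarization flow: rank stored ToS chunks
# by relevance to the user's question, then assemble a grounded prompt.
# The word-overlap scoring below is a toy stand-in for Pinecone's
# vector similarity search; all names here are illustrative.

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks with the most word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the grounded prompt sent to the summarization model."""
    joined = "\n---\n".join(context)
    return (
        "Summarize the following Terms of Service excerpts, "
        "focusing on data-privacy implications.\n\n"
        f"Context:\n{joined}\n\nQuestion: {query}"
    )

chunks = [
    "We may share your personal data with third-party advertisers.",
    "Refunds are processed within 14 business days.",
    "You can delete your account and personal data at any time.",
]
context = retrieve("How is my personal data shared?", chunks)
prompt = build_prompt("How is my personal data shared?", context)
# `prompt` would then be sent to the text-bison model for inference,
# with TruLens feedback functions scoring the grounded answer.
```

Because the context is retrieved at query time, an updated scrape of a company's ToS lands in the prompt immediately, which is the reason the project favors RAG over repeated fine-tuning.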
🚀💻 Be the first to build an AI App on Google's models! Hackathon on July 7-10. 🔬🌐 Try new Vertex AI features from Google Cloud Platform. 🤝🌍 Learn from AI leaders and connect with like-minded people. 🛠️📱 Build apps with the world's best AI tools! 💡🌍 Solve real-world problems with Generative AI models
⌚ 7-day Hackathon 👥 Create or find your team on the platform 💡 Get educational material for all levels of experience 🚀 Use the best AI tech from Anthropic, OpenAI, Stability AI, ElevenLabs and more to build your own gaming project