privateGPT AI technology page: Top Builders

Explore the top contributors in our community, ranked by their number of privateGPT app submissions.

PrivateGPT

PrivateGPT is a tool that lets you ask questions about your documents without an internet connection, using the power of Large Language Models (LLMs). It is 100% private: no data leaves your execution environment at any point. You can ingest documents and query them entirely offline!

PrivateGPT is built with LangChain, GPT4All, LlamaCpp, Chroma, and SentenceTransformers.


Setup and Usage

  1. Install all required packages by running pip3 install -r requirements.txt.
  2. Download an LLM model (e.g., ggml-gpt4all-j-v1.3-groovy.bin) and place it in a directory of your choice.
  3. Rename example.env to .env and edit the variables according to your setup (a sample filled-in .env follows this list).
  4. Run python ingest.py to ingest your documents.
  5. Run python privateGPT.py to ask questions about your documents locally.
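
For reference, a filled-in .env typically looks something like the sketch below. The variable names follow the original privateGPT repository's example.env; the values are placeholders to adapt to your setup.

    PERSIST_DIRECTORY=db
    MODEL_TYPE=GPT4All
    MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
    EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
    MODEL_N_CTX=1000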

Supported Document Formats

PrivateGPT supports the following document formats:

  • .csv: CSV
  • .docx: Word Document
  • .doc: Word Document
  • .enex: EverNote
  • .eml: Email
  • .epub: EPub
  • .html: HTML File
  • .md: Markdown
  • .msg: Outlook Message
  • .odt: Open Document Text
  • .pdf: Portable Document Format (PDF)
  • .pptx: PowerPoint Document
  • .ppt: PowerPoint Document
  • .txt: Text file (UTF-8)

How It Works

PrivateGPT leverages local models and the power of LangChain to run the entire pipeline locally, without any data leaving your environment, and with reasonable performance.

  • ingest.py uses LangChain tools to parse documents and create embeddings locally with HuggingFaceEmbeddings (SentenceTransformers), then stores the result in a local Chroma vector store.
  • privateGPT.py uses a local LLM based on GPT4All-J or LlamaCpp to understand questions and compose answers. The context for each answer is pulled from the local vector store using a similarity search that locates the right pieces of context from the docs (a sketch of both stages follows this list).
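
As a rough illustration of these two stages, here is a minimal sketch using 2023-era LangChain APIs. The document chunks, paths, and model names are illustrative assumptions, not the project's exact code.

    # Minimal sketch of the ingest/query pipeline (assumed 2023-era LangChain APIs).
    from langchain.embeddings import HuggingFaceEmbeddings
    from langchain.vectorstores import Chroma
    from langchain.llms import GPT4All
    from langchain.chains import RetrievalQA

    # Embeddings are computed locally via SentenceTransformers.
    embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")

    # Ingestion: embed parsed document chunks and persist them to a local Chroma store.
    db = Chroma.from_texts(
        ["first chunk of a parsed document...", "second chunk..."],  # placeholder chunks
        embedding=embeddings,
        persist_directory="db",
    )
    db.persist()

    # Querying: a similarity search retrieves context, which a local LLM uses to answer.
    llm = GPT4All(model="models/ggml-gpt4all-j-v1.3-groovy.bin")
    qa = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=db.as_retriever(search_kwargs={"k": 4}),
    )
    print(qa.run("What do my documents say about X?"))

Everything here runs offline once the embedding model and the LLM weights are on disk.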

System Requirements

Python Version

To use this software, you must have Python 3.10 or later installed; the project will not build or run on earlier versions. You can verify your interpreter with the quick check below.
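
    import sys
    # Must pass before installing dependencies; privateGPT requires Python 3.10+.
    assert sys.version_info >= (3, 10), "privateGPT needs Python 3.10 or later"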

C++ Compiler

If you encounter an error while building a wheel during the pip install step, you may need to install a C++ compiler. For example, on macOS this typically means the Xcode Command Line Tools (xcode-select --install), and on Windows the Visual Studio Build Tools with C++ support; follow the instructions for your operating system.

privateGPT AI technology page: Hackathon projects

Discover innovative solutions built with privateGPT by our community members during our hackathons.

Trading-Agent-

A trading agent AI is an artificial intelligence system that uses computational intelligence methods, such as machine learning and deep reinforcement learning, to automatically discover, implement, and fine-tune strategies for autonomous, adaptive, automated trading in financial markets.

This project implements a stock trading bot trained using deep reinforcement learning, specifically Deep Q-learning. The implementation is kept simple and as close as possible to the algorithm discussed in the paper, for learning purposes. Generally, reinforcement learning is a family of machine learning techniques that lets us create intelligent agents that learn from the environment by interacting with it, acquiring an optimal policy by trial and error. This is especially useful in many real-world tasks where supervised learning might not be the best approach, for reasons such as the nature of the task itself or the lack of appropriately labelled data. The important idea is that this technique can be applied to any real-world task that can be described, loosely, as a Markovian process.

This work uses a model-free reinforcement learning technique called Deep Q-learning (the neural variant of Q-learning). At any given time step, the agent observes its current state (an n-day window of stock prices), selects and performs an action (buy, sell, or hold), observes a subsequent state, receives a reward signal (the difference in portfolio position), and finally adjusts its parameters based on the gradient of the computed loss (see the sketch below).

There have been several improvements to the Q-learning algorithm over the years, and a few have been implemented in this project:

  • Vanilla DQN
  • DQN with fixed target distribution
  • Double DQN
  • Prioritized Experience Replay
  • Dueling Network Architectures

Trained on GOOG stock data from 2010-17, the agent was tested on 2019 data with a profit of $1141.45 (validated on 2018 with a profit of $863.41).
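
To make the learning loop concrete, here is a minimal sketch of the Deep Q-learning update described above, written in PyTorch. The window size, network shape, hyperparameters, and helper names are illustrative assumptions, not the project's exact implementation.

    # Minimal Deep Q-learning sketch (assumed shapes/hyperparameters, not the project's code).
    import random
    from collections import deque

    import torch
    import torch.nn as nn

    WINDOW = 10    # n-day stock-price window used as the state (assumption)
    ACTIONS = 3    # buy / sell / hold
    GAMMA = 0.99   # discount factor

    class QNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(WINDOW, 64), nn.ReLU(),
                nn.Linear(64, ACTIONS),
            )

        def forward(self, x):
            return self.net(x)

    policy, target = QNet(), QNet()
    target.load_state_dict(policy.state_dict())   # fixed target network
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)                 # experience replay buffer

    def act(state, eps=0.1):
        # Epsilon-greedy action selection over the policy network's Q-values.
        if random.random() < eps:
            return random.randrange(ACTIONS)
        with torch.no_grad():
            return policy(state).argmax().item()

    def train_step(batch_size=32):
        # One gradient step on a sampled minibatch of (s, a, r, s') transitions.
        if len(replay) < batch_size:
            return
        s, a, r, s2 = (torch.stack(x) for x in zip(*random.sample(replay, batch_size)))
        q = policy(s).gather(1, a.long().view(-1, 1)).squeeze(1)
        with torch.no_grad():
            q_next = target(s2).max(1).values     # max over a' of Q_target(s', a')
        loss = nn.functional.mse_loss(q, r + GAMMA * q_next)
        opt.zero_grad()
        loss.backward()
        opt.step()

The improvements listed above each modify a piece of this loop: Prioritized Experience Replay changes how transitions are sampled, Double DQN changes how q_next is computed, and dueling architectures change how QNet is structured.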