
Text Generation Web UI

The Text Generation Web UI is a Gradio-based interface for running large language models such as LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA. It provides a user-friendly way to interact with these models and generate text, with features such as model switching, a notebook mode, a chat mode, and more. The project aims to be for text generation what AUTOMATIC1111/stable-diffusion-webui is for image generation.


Features

  • Dropdown menu for switching between models
  • Notebook mode that resembles OpenAI's playground
  • Chat mode for conversation and role-playing
  • Instruct mode compatible with various formats, including Alpaca, Vicuna, Open Assistant, Dolly, Koala, ChatGLM, and MOSS
  • Nice HTML output for GPT-4chan
  • Markdown output for GALACTICA, including LaTeX rendering
  • Custom chat characters
  • Advanced chat features (send images, get audio responses with TTS)
  • Efficient text streaming
  • Parameter presets
  • Layer splitting across GPU(s), CPU, and disk
  • CPU mode
  • and much more!
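
The layer-splitting feature can be pictured as assigning each model layer to the first device that still has room, spilling over from GPU to CPU to disk. The sketch below is a toy illustration of that idea only, not the project's actual offloading code; the device names and capacities are made up:

```python
# Toy illustration of layer splitting: greedily assign each layer to the
# first device whose remaining capacity (counted in layers) is not exhausted.
# Device names and capacities are illustrative, not real defaults.
def split_layers(num_layers, capacities):
    """Map layer index -> device name, spilling over in order."""
    assignment = {}
    devices = list(capacities.items())  # e.g. [("gpu0", 20), ("cpu", 8), ("disk", 100)]
    d = 0
    remaining = devices[0][1]
    for layer in range(num_layers):
        while remaining == 0:
            d += 1
            remaining = devices[d][1]
        assignment[layer] = devices[d][0]
        remaining -= 1
    return assignment

placement = split_layers(32, {"gpu0": 20, "cpu": 8, "disk": 100})
print(placement[0], placement[25], placement[30])  # gpu0 cpu disk
```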


Installation

There are different installation methods available, including one-click installers for Windows, Linux, and macOS, as well as manual installation using Conda. Detailed installation instructions can be found in the Text Generation Web UI repository.

Downloading Models

Models should be placed inside the models folder. You can download models from Hugging Face, such as Pythia, OPT, GALACTICA, and GPT-J 6B. Use the download-model.py script to automatically download a model from Hugging Face.
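
As a rough sketch, the download script mirrors a Hugging Face repo ID into a subfolder of the models folder. The naming scheme below (replacing the slash in the repo ID with an underscore) is an assumption based on common versions of the script and may differ in yours:

```python
from pathlib import Path

def model_dir(repo_id, models_root="models"):
    """Where a downloaded model would land, e.g. 'facebook/opt-1.3b'
    -> models/facebook_opt-1.3b (naming scheme assumed, may vary)."""
    return Path(models_root) / repo_id.replace("/", "_")

print(model_dir("facebook/opt-1.3b").name)  # facebook_opt-1.3b
```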

Starting the Web UI

After installing the necessary dependencies and downloading the models, you can start the web UI by running the server.py script. The web UI can be accessed at http://localhost:7860/?__theme=dark. You can customize the interface and behavior using various command-line flags.
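
If you start the server with its API enabled, you can also generate text programmatically. The endpoint path, port, and field names below follow older versions of the project's API and should be treated as assumptions that may not match your version (the `generate` helper is defined but not called here, since it needs a running server):

```python
import json
import urllib.request

# Assumed endpoint for the built-in API; the path and port vary by version.
API_URL = "http://localhost:5000/api/v1/generate"

def build_payload(prompt, max_new_tokens=200):
    """Request body for the (assumed) generate endpoint."""
    return {"prompt": prompt, "max_new_tokens": max_new_tokens}

def generate(prompt):
    """POST the prompt to a locally running server (not called here)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]

print(build_payload("Hello")["prompt"])  # Hello
```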

System Requirements

Check the wiki for examples of VRAM and RAM usage in both GPU and CPU mode.


Contributing

Pull requests, suggestions, and issue reports are welcome. Before reporting a bug, make sure you have followed the installation instructions provided and searched for existing issues.

Text Generation Web UI Hackathon Projects

Discover solutions built with the Text Generation Web UI by community members during our hackathons.

Multilingual Speech Recognizer and AI Assistant


Overview:

  • Python programming: Python provides a solid, flexible, and scalable foundation for the speech recognizer and assistant.
  • OpenAI API integration: the OpenAI API lets the assistant comprehend, process, and respond to queries across a range of languages and topics.
  • Google recognizer for voice-to-text: Google's speech recognition technology transcribes spoken words into text accurately and efficiently, forming the basis for seamless interaction.
  • Streamlit for deployment: deploying with Streamlit provides an intuitive, user-friendly interface that is accessible across platforms.

Advantages: it breaks language barriers to cater to a global audience, learns and adapts to deliver tailored responses, and speeds up interaction through voice, boosting productivity.

Market demand: the market demands communication solutions that transcend language barriers and enable efficient interaction. The Multilingual Speech Recognizer & AI Assistant addresses this by offering a versatile, intelligent, and accessible platform.

Conclusion: with its multilingual competence, AI-powered assistance, and user-friendly deployment, the assistant aims to make communication and interaction effortless for a diverse global audience.
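
The pipeline described above (voice in, text out of the recognizer, answer out of the model) can be sketched as three stages. The Google recognizer and OpenAI calls are stubbed here so the sketch runs standalone, and all function names are illustrative rather than the project's actual code:

```python
# Illustrative pipeline: speech -> text -> assistant reply.
# The real project uses Google's recognizer and the OpenAI API;
# both are stubbed here so the sketch is self-contained.

def recognize_speech(audio_bytes, language="en-US"):
    """Stub for Google voice-to-text; would return the transcript."""
    return "what is the capital of France"  # canned transcript

def ask_assistant(transcript):
    """Stub for an OpenAI API call; would return the model's answer."""
    return f"You asked: {transcript!r}. (model reply would go here)"

def handle_utterance(audio_bytes, language="en-US"):
    """Full pipeline: transcribe the audio, then ask the assistant."""
    text = recognize_speech(audio_bytes, language=language)
    return ask_assistant(text)

print(handle_utterance(b"", language="fr-FR"))
```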

Building Your Own Jarvis


JARVIS acts as an intelligent intermediary between users and a network of specialized agents. When a user interacts with the system, their message goes to JARVIS as the primary point of contact. After understanding the user's need, JARVIS searches a repository of specialized agents, each programmed to excel at specific tasks. Whether it's fetching information, performing calculations, or executing complex actions, JARVIS knows the right agent for the job.

Upon identifying the ideal agent, JARVIS initiates a seamless handover. The chosen agent becomes active and takes responsibility for fulfilling the user's request. This activation extends to both the frontend and backend components, ensuring a cohesive, synchronized interaction between the user, JARVIS, and the chosen agent.

Rather than interacting with multiple agents individually, users deal with a single point of contact: they make their queries and requests in natural language while JARVIS handles the orchestration behind the scenes.

To demonstrate the system's potential, we've built a user-friendly web interface that sidesteps authentication complexities. Inside, two prototype agents, "music" and "call", showcase the concept. Looking ahead, we plan to integrate a growing repertoire of specialized agents, using prompt engineering to craft prompts that elicit precise, effective responses. By refining these prompts and training the agents, we aim to improve the system's accuracy and versatility so it can address an ever-widening array of user needs and inquiries.
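
The orchestration described above boils down to a dispatcher: inspect the message, pick an agent from a registry, and hand the request over, falling back to a direct answer when no agent matches. A minimal sketch with the two prototype agents, where keyword matching stands in for the real intent detection:

```python
# Minimal agent router in the spirit of the description above.
# Keyword matching is a stand-in for real intent detection;
# agent names and replies are illustrative.

AGENTS = {
    "music": lambda msg: f"[music agent] playing something for: {msg}",
    "call": lambda msg: f"[call agent] dialing based on: {msg}",
}

def jarvis(message):
    """Route the message to the first matching agent, else answer directly."""
    lowered = message.lower()
    for name, agent in AGENTS.items():
        if name in lowered:
            return agent(message)
    return f"[jarvis] handling directly: {message}"

print(jarvis("Play some music please"))
print(jarvis("Call my mom"))
print(jarvis("What's the weather?"))
```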