PaLM2 Tutorial: Building Character-based Chatbot App using Powerful AI Model

Friday, June 23, 2023 by septian_adi_nugraha408


Introduction to PaLM2 Model

PaLM 2 is Google's next-generation large language model that builds on their legacy of breakthrough research in machine learning and responsible AI. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency, and natural language generation, performing better than previous state-of-the-art LLMs, including the original PaLM.

PaLM 2 can decompose a complex task into simpler subtasks and is better at understanding nuances of the human language than previous LLMs, like PaLM. For example, PaLM 2 excels at understanding riddles and idioms, which requires understanding ambiguous and figurative meaning of words, rather than the literal meaning.

PaLM 2 was pre-trained on parallel multilingual text and on a much larger corpus of different languages than its predecessor, PaLM. This makes PaLM 2 excel at multilingual tasks.

PaLM 2 was pre-trained on a large quantity of webpage data, source code, and other datasets. This means that it excels at popular programming languages like Python and JavaScript, but is also capable of generating specialized code in languages like Prolog, Fortran, and Verilog. Combining this with its language capabilities can help teams collaborate across languages.

PaLM 2 excels at tasks like advanced reasoning, translation, and code generation because of how it was built. It improves upon its predecessor, PaLM, by unifying three distinct research advancements in large language models:

  • Use of compute-optimal scaling: The basic idea of compute-optimal scaling is to scale the model size and the training dataset size in proportion to each other. This new technique makes PaLM 2 smaller than PaLM, but more efficient with overall better performance, including faster inference, fewer parameters to serve, and a lower serving cost.
  • Improved dataset mixture: Previous LLMs, like PaLM, used pre-training datasets that were mostly English-only text. PaLM 2 improves on its corpus with a more multilingual and diverse pre-training mixture, which includes hundreds of human and programming languages, mathematical equations, scientific papers, and web pages.
  • Updated model architecture and objective: PaLM 2 has an improved architecture and was trained on a variety of different tasks, all of which helps PaLM 2 learn different aspects of language.

PaLM 2 achieves state-of-the-art results on reasoning benchmark tasks such as WinoGrande and BigBench-Hard. It is significantly more multilingual than its predecessor, PaLM, achieving better results on benchmarks such as XSum, WikiLingua and XLSum. PaLM 2 also improves translation capability over PaLM and Google Translate in languages like Portuguese and Chinese.

PaLM 2 continues Google's responsible AI development and commitment to safety. It was evaluated rigorously for its potential harms and biases, capabilities and downstream uses in research and in-product applications. It's being used in other state-of-the-art models, like Med-PaLM 2 and Sec-PaLM, and is powering generative AI features and tools at Google, like Bard and the PaLM API.

For more information, you can visit the official PaLM 2 page.

Introduction to Tag Usage for Structuring Model Responses

When working with AI models, it's often useful to structure the output in a way that makes it easier to parse and use in your application. One way to do this is by using tags in your prompts that the model will then include in its responses. This is similar to using HTML or XML tags to structure data.

For example, you might use tags like <char-style></char-style>, <bio></bio>, and <name></name> to indicate different parts of the model's response. Here's how you might use these tags in a prompt:

prompt = """
Please generate a character for a fantasy novel.

<name>Name of the character</name>

<bio>A brief biography of the character</bio>

<char-style>The character's speaking and behavior style</char-style>
"""

In this prompt, the model is instructed to generate a character name, a brief biography, and a description of the character's speaking and behavior style. Each of these pieces of information is enclosed in its own set of tags.

When the model generates its response, it might look something like this:

response = """
<name>Thorgar the Mighty</name>

<bio>Thorgar is a fearsome warrior from the northern lands, known for his strength and bravery. He was raised by wolves and is now the leader of his own tribe.</bio>

<char-style>Thorgar speaks in a gruff, commanding voice and is known for his directness. He is not one for subtlety and prefers to solve problems with his axe.</char-style>
"""

You can then parse this response using Python's built-in re module to extract the information contained within each set of tags. This makes it easy to use this information in your application, whether you're displaying it to the user, using it to guide further interactions with the model, or storing it for later use.

Here's an example of how you might parse this response in Python:

import re

name ='<name>(.*?)</name>', response).group(1)
bio ='<bio>(.*?)</bio>', response).group(1)
char_style ='<char-style>(.*?)</char-style>', response).group(1)

print(f"Name: {name}")
print(f"Bio: {bio}")
print(f"Character Style: {char_style}")

This would output:

Name: Thorgar the Mighty
Bio: Thorgar is a fearsome warrior from the northern lands, known for his strength and bravery. He was raised by wolves and is now the leader of his own tribe.
Character Style: Thorgar speaks in a gruff, commanding voice and is known for his directness. He is not one for subtlety and prefers to solve problems with his axe.

As you can see, using tags in your prompts can be a powerful way to structure the output of your AI models, making it easier to work with in your applications.

Introduction to ReactJS

ReactJS, commonly referred to as React, is an open-source JavaScript library for building user interfaces or UI components. It was developed by Facebook and is maintained by Facebook and a community of individual developers and companies. React can be used as a base in the development of single-page or mobile applications.

React allows developers to create large web applications that can update and render efficiently in response to data changes, without requiring a page reload. The main purpose of React is to be fast, scalable, and simple. It works only on user interfaces in the application, which makes it easy to integrate with other libraries or existing projects.

React uses a virtual DOM (Document Object Model): updates are first applied to a lightweight in-memory representation, and only the resulting differences are written to the slower, real DOM, which improves the performance of the app. React can also render on the server using Node, and it can power native mobile applications using a variant called React Native.

React implements one-way data flow, which reduces boilerplate and is easier to reason about than traditional two-way data binding.

Introduction to Flask

Flask is a micro web framework written in Python. It is classified as a microframework because it does not require particular tools or libraries. It has no database abstraction layer, form validation, or any other components where pre-existing third-party libraries provide common functions.

However, Flask supports extensions that can add application features as if they were implemented in Flask itself. Extensions exist for object-relational mappers, form validation, upload handling, various open authentication technologies and several common framework related tools.

Flask is easy to get started with as a beginner because there is little boilerplate code for getting a simple app up and running. For a more advanced application, you'll want to use a specific project layout that can help keep things organized as your application becomes more complex.

Flask is also widely used for its simplicity, flexibility and fine-grained control. It is a popular choice for both small and large applications and is particularly good for tight integration with frontend JavaScript frameworks like React.


  • Basic knowledge and intuition of prompt engineering
  • Basic knowledge of app development using ReactJS and Typescript
  • Basic knowledge of Python and Flask framework


  1. Preparing the Development Environment
  2. Engineering the Prompt and Testing It
  3. Incorporating the Prompt into the Backend
  4. Testing the Backend
  5. Building the Front-End for the Chatbot App
  6. Testing the Conversation with Yoda Chatbot


Preparing the Development Environment

Before we start building our application, we need to set up our development environment. This involves initializing our backend and frontend projects.

Initializing the Backend Project

Our backend will be built using Flask, a lightweight and flexible Python web framework. Here are the steps to initialize the backend project:

  1. Create a new directory for your project. You can name it anything you like. Navigate into it using the command line.

    mkdir palm2-charbot-backend
    cd palm2-charbot-backend
  2. Set up a virtual environment. This is a self-contained environment where you can install the Python packages needed for your project without interfering with the packages installed in your system-wide Python. You can create a virtual environment using the following commands:

    python3 -m venv venv
    source venv/bin/activate
  3. Install Flask. With your virtual environment activated, you can install Flask using pip, the Python package installer:

    pip install flask
  4. Create a new file named This will be the main file for your Flask application. For now, give it just the two lines from flask import Flask and app = Flask(__name__), so that flask run has an application to discover.

  5. Run your Flask application. You can start your Flask application using the following command:

    flask run

    If everything is set up correctly, you should see output indicating that your application is running and listening for connections.

Initializing the Frontend Project

Our frontend will be built using React, a popular JavaScript library for building user interfaces. Here are the steps to initialize the frontend project:

  1. Install Node.js and npm. Node.js is a JavaScript runtime that allows you to run JavaScript code outside of a web browser. npm (Node Package Manager) is a tool that comes with Node.js and allows you to install JavaScript packages. You can download Node.js and npm from the official website.

  2. Install Create React App. Create React App is a tool that sets up a modern web application by running one command. You can install it globally using the following command:

    npm install -g create-react-app
  3. Create a new React application. Navigate to the directory where you want to create your application and run the following command:

    npx create-react-app palm-charbot

    We'll call the app "palm-charbot"; "charbot" is a portmanteau of "character" and "bot".

  4. Start your React application. Navigate into your new application's directory and start the application:

    cd palm-charbot
    npm start

    Your application should now be running and accessible in your web browser.

With our backend and frontend projects initialized, we can now start building our chatbot application.

Engineering the Prompt and Testing It

In this part, we use MakerSuite for prompt engineering and testing purposes. MakerSuite is a tool provided by Google for prototyping and testing prompts against its PaLM models. The underlying PaLM API has two main functions that we can use: generate_text() and chat().

  1. generate_text(): This API is used for generating text based on a given prompt. It can be used in two distinct ways:

    • Text Prompt (Completion and Text Generation): In this use case, we provide a text prompt to the API, and it generates a continuation of the text. This is useful for tasks like writing an essay, generating a story, or completing a sentence.

    • Data Prompt (Text Generation with Examples Data): In this use case, we provide a data prompt, which includes examples of the desired output. The API uses these examples to generate a similar output. This is useful for tasks where we want the output to follow a specific format or style. We can also provide a custom context to further adjust the output.

  2. chat(): This API is used for generating conversational responses. We provide example dialogues to the API, which it uses to generate a response in a conversational style. This is useful for building chatbots or virtual assistants. Like with the generate_text() API, we can also provide a custom context to influence the output.

In this tutorial, we use MakerSuite to test our prompts and see the responses generated by the model. This allows us to fine-tune our prompts and ensure that they produce the desired output. For more information on how to use MakerSuite, you can refer to the MakerSuite Quickstart Guide.

Please note that you might need to join the waitlist if you haven't already gained access to MakerSuite. If everything's ready and you can access the home page of MakerSuite, let's get started!

the home page of MakerSuite

On the home page of MakerSuite, we'll see three main menus:

  1. Text Prompt
  2. Data Prompt
  3. Chat Prompt

Essentially, as we'll explore later in the Python code, Text and Data Prompts are the same, using the same generate_text() function and text model. The difference lies in how the Data Prompt is geared towards generating responses that follow certain patterns in the data, which are arranged neatly in a tabular manner.

Let's get started composing our prompts to power our chatbot, which will take on the personalities of popular characters from movies, books, or video games, based on our input!

Using Text Prompt to Generate the Character Details

In this section, we'll start by creating a prompt that instructs the AI to generate the details of our chatbot's character. To do this, click on the "Create" button in the "Text prompt" section on the MakerSuite home page.

Text prompt input

In MakerSuite, a "Text Prompt" is a set of instructions that guides the AI in generating text. For our chatbot, we'll need the AI to:

  1. Assume the personality of a popular character, based on our input.
  2. Generate example dialogues that showcase the unique style and quirks of the character. These will be used later by the chat() API.
  3. Generate character details, such as a Twitter-style bio, character style, and the name of the character.
  4. Format the character details in XML-like tags. This will make it easier for us to extract specific pieces of information from the AI's response.

Here's the prompt we'll use:

The bot will assume the character of {{character}} and will speak and behave accordingly. Please give me: 
1. 5 examples of example dialogues which show the unique style and quirks of the character, wrapped in a single <examples><dialogue><user></user><bot></bot></dialogue></examples> tag, like so: <examples><dialogue><user>Greetings</user><bot>Well met</bot></dialogue></examples>.
2. The Twitter bio of the character, string format and wrapped in <bio></bio> tag. 
3. The speech or mannerism that makes the character unique, string format and wrapped in <char-style></char-style>.
4. The character name, string format and wrapped in <name></name> tag.
Let's begin 

After typing or pasting this prompt into the input field, click the "Run" button or press CTRL + Enter.

the prompt pasted in, with the double curly brackets processed as an input variable!

MakerSuite will automatically process the "character" word within the double curly brackets as an input variable. An additional table input will appear below our prompts, where we can input the character's name.
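MakerSuite performs this substitution for you in the UI; when we later move the prompt into code, the same effect is plain string substitution. A minimal sketch, with a shortened template and a hypothetical fill_prompt helper of our own:

```python
template = "The bot will assume the character of {{character}} and will speak and behave accordingly."

def fill_prompt(template, **variables):
    # Substitute each {{name}} placeholder with its value,
    # mimicking MakerSuite's input-variable handling
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

filled = fill_prompt(template, character="Yoda")
print(filled)
```

In the backend code later, we'll get the same result with a Python f-string instead of a template helper.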

For this tutorial, we'll use "Yoda" as our character. As we type the name into the "INPUT" column, the prompt will adjust accordingly.

The input adjusted the overall prompt

Once we've input the character's name, we can run our prompt by clicking the "Run" button or pressing CTRL + Enter. The AI's response will appear in the "OUTPUT" column.

the AI responded by giving the examples and details surrounded by tags, as requested

As you can see, the AI has generated example dialogues and character details as requested, and formatted them using XML-like tags. This formatting allows us to use the AI's response programmatically, meaning we can use it as input for our code. In the next section, we'll show you how to process and extract the content of these tags, and return them in a format that's easy to use in the front-end, such as JSON.

Incorporating the Prompt into the Backend

Alright, as we've probably learned already, an AI model is only as good as the real-world use cases we put it to. So, in this section we'll take things further toward a "production" version. Let's go back to our backend project. Open up the terminal, change directory to your project, make sure your virtual environment is activated, and install these libraries:

# Install dotenv library to store and obtain our API key safely in our private .env file
pip install python-dotenv
# The Google Generative AI library, an SDK which we can use to connect to Google's PaLM2 model via our codes
pip install google-generativeai
# Freeze the dependencies in a requirements.txt file
pip freeze > requirements.txt

After that, let's create our .env file and define the variable that will store our API key. The variable name is our own choice; here we'll use PALM_API_KEY, and the code will read the same name later.

# .env
PALM_API_KEY=your-api-key-here
Wait, where did we get the API key anyway? Good question! Head to this URL and choose "Create API key in new project". A pop-up should appear, in which you can copy the API key using the "Copy" button, or just highlight the text and copy it with the right-click context menu or CTRL+C. Once copied, paste it into our .env file.
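If you're wondering what python-dotenv actually does with that file, it essentially reads KEY=VALUE lines into environment variables. Here's a rough, stdlib-only imitation of that behavior; load_env_file is our own illustration and PALM_API_KEY is the variable name we chose, while in the real app we'll simply call load_dotenv():

```python
import os
import tempfile

def load_env_file(path):
    # Minimal imitation of python-dotenv's load_dotenv():
    # read KEY=VALUE lines into os.environ, skipping blanks and comments
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# Write a throwaway .env-style file and load it
env_path = os.path.join(tempfile.mkdtemp(), ".env")
with open(env_path, "w") as f:
    f.write("# .env\nPALM_API_KEY=dummy-key-for-illustration\n")

load_env_file(env_path)
print(os.environ["PALM_API_KEY"])  # dummy-key-for-illustration
```

Note that the real load_dotenv() by default does not overwrite variables that are already set; this sketch overwrites unconditionally for simplicity.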

Next, let's edit our file! In this file, we will add two endpoints: /detail and /chat. The handler of the /detail endpoint will run the prompt we engineered in MakerSuite earlier, while the /chat endpoint will run another, much simpler prompt, thanks to the heavy lifting already done by the /detail endpoint. The main differences between the two are the model each one uses and the API function associated with that model.

  1. Importing the necessary libraries and initializing the Flask app

    Here, you're importing the necessary libraries, loading your API key from the .env file, and initializing your Flask app. You're also defining a route for the home page of your app.

    import os
    import re
    from flask import Flask, request, jsonify
    from dotenv import load_dotenv
    import google.generativeai as palm

    load_dotenv()  # Load PALM_API_KEY (our chosen variable name) from .env
    palm.configure(api_key=os.getenv('PALM_API_KEY'))

    app = Flask(__name__)

    @app.route('/')
    def home():
        return "Hello, World!"
  2. Defining the route for getting character details

    This route is where you'll send a POST request to get the details of a character. You're using the generate_text() function from the palm library to generate the character details based on the prompt you've defined. We use the text-bison-001 model; the API key has already been passed to the configure() function during initialization.

    @app.route('/detail', methods=['POST'])
    def get_char_detail():
        data = request.get_json()
        defaults = {
            'model': 'models/text-bison-001',
            'temperature': 0.7,
            'candidate_count': 1,
            'top_k': 40,
            'top_p': 0.95,
            'max_output_tokens': 1024,
            'stop_sequences': [],
        }
        num_examples = 5
        prompt_for_example = f"{num_examples} examples of example dialogues which show the unique style and quirks of the character, wrapped in a single <examples><dialogue><user></user><bot></bot></dialogue></examples> tag, like so: <examples><dialogue><user>Greetings</user><bot>Well met</bot></dialogue></examples>."
        prompt = f"""The bot will assume the character of {data['character']} and will speak and behave accordingly. Please give me: 
        1. {prompt_for_example}
        2. The Twitter bio of the character, string format and wrapped in <bio></bio> tag. 
        3. The speech or mannerism that makes the character unique, string format and wrapped in <char-style></char-style>.
        4. The character name, string format and wrapped in <name></name> tag.
        Let's begin 
        1. """
        response = palm.generate_text(**defaults, prompt=prompt)
        if response.result is not None:
            bio_match ='<bio>(.*?)</bio>', response.result, re.DOTALL)
            name_match ='<name>(.*?)</name>', response.result, re.DOTALL)
            char_style_match ='<char-style>(.*?)</char-style>', response.result, re.DOTALL)
            dialogue_matches = re.findall(r'<dialogue>(.*?)</dialogue>', response.result, re.DOTALL)
            examples = []
            for dialogue in dialogue_matches:
                # Extract the content of the <user> and <bot> tags within the dialogue
                user_match ='<user>(.*?)</user>', dialogue, re.DOTALL)
                bot_match ='<bot>(.*?)</bot>', dialogue, re.DOTALL)
                # If both matches were found, add a dictionary to the examples list
                if user_match and bot_match:
                    examples.append({'user':, 'bot':})
            # If a match was found, get the first group (the content of the tag)
            bio = if bio_match else None
            name = if name_match else None
            char_style = if char_style_match else None
            result_data = {
                "name": name,
                "bio": bio,
                "char_style": char_style,
                "example_chats": examples
            }
            # Create a dictionary with 'status' and 'result' fields
            response_data = {
                'status': 'success',
                'result': result_data
            }
        else:
            response_data = {
                'status': 'failed',
                'result': "No result"
            }
        return jsonify(response_data)

    In our application, we've chosen to use XML-like tags to structure the data returned by the AI model. These tags serve as markers that help us segregate the different pieces of information in the response. For example, we use <bio></bio> tags to encapsulate the character's bio, <name></name> for the character's name, and so on. This approach allows us to easily extract the data we need from the AI's response using regular expressions. Once extracted, we can then insert these values into a JSON response object. This object is then sent back to the client, providing a structured and easily consumable format for the front-end of our application.

  3. Defining the route for sending a chat message

    This route is where you'll send a POST request to send a chat message. You're using the chat() function from the palm library to generate a response from the chatbot based on the message and context you've provided. We also use the chat-bison-001 model, as opposed to text-bison-001.

    @app.route("/chat", methods=["POST"])
    def send_chat():
        data = request.get_json()
        defaults = {
            'model': 'models/chat-bison-001',
            'temperature': 0.25,
            'candidate_count': 1,
            'top_k': 40,
            'top_p': 0.95,
        }
        context = f"Assume the character of {data['character']}, you will speak with the same style and mannerism as the character. You will never, ever mention anything about being AI or large language model, instead insisting to stay in character. Notable style: {data['style']}."
        print("context:", context)
        examples = data['examples']
        # Append the new message to the conversation history
        messages = data['history'] + [data['message']]
        response =
            **defaults,
            context=context,
            examples=examples,
            messages=messages
        )
        print(response.last)  # Response of the AI to your most recent request
        if response.last is not None:
            response_data = {
                'status': 'success',
                'result': response.last
            }
        else:
            response_data = {
                'status': 'failed',
                'result': 'No result'
            }
        return jsonify(response_data)
  4. Running the Flask app

    Finally, you're running your Flask app. You've set debug=True so that the server will automatically reload if you make any changes to your code, and it will provide detailed error messages if anything goes wrong.

    if __name__ == '__main__':
Now, let's run our backend server. In your terminal, execute the command flask run (or python, depending on your setup). If everything is configured correctly, your terminal should display an output similar to this:

the output of flask run

This output indicates that our server is up and running, ready to listen for incoming requests on localhost port 5000.

To test our endpoints, we'll use a tool called Insomnia. Insomnia is a REST client that allows us to send HTTP requests to our server and view the responses. It's a handy tool for testing and debugging our server endpoints. In the next section, we'll go over how to use Insomnia to send requests to our server.

Testing the Backend

Fire up your REST API tester, which in this case, I use Insomnia. Let's set up the JSON payload, URL, and HTTP method as shown below.

the request specification of our /detail endpoint

Alternatively, you can copy the following cURL command and paste it into the URL field in Insomnia. The software will automatically parse the payload for you.

curl --request POST \
  --url http://localhost:5000/detail \
  --header 'Content-Type: application/json' \
  --data '{"character": "Yoda"}'

Let's try calling this endpoint! If you read our code earlier, you should probably guess that it will return the JSON of the character's bio, style, example chats, and name.

the response of our request for Yoda's character detail

Sweet! Notice how the details are neatly arranged in the JSON response. This is impressive considering that the data is generated by an AI model, which is inherently open-ended. The power of the PaLM2 model, combined with our prompt engineering, enables us to build a service that leverages AI for creative tasks. We use standardized, predictable inputs to generate desired outputs, which are structured according to the format we declared in our JSON object.

Next, let's try our /chat endpoint. Admittedly, in a more ideal setting, our front-end app will automatically populate the rest of the parameters, only requiring us to provide the character's name. For testing purposes, though, let's call the /chat endpoint with the parameters specified below.

the parameter for our /chat endpoint

Or, just like before, we can always use this cURL command and paste it in Insomnia.

curl --request POST \
  --url http://localhost:5000/chat \
  --header 'Content-Type: application/json' \
  --data '{"character":"Yoda","style":"Speaks in a Yoda-esque manner, using the inverted word order of \"subject-verb-object\".","examples":[["I'\''m leaving.","You can'\''t escape me."]],"history":[],"message":"Hey there"}'
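Assembling that JSON payload by hand is error-prone, so here's a small sketch of how the same body can be built programmatically from a /detail-style result. build_chat_payload is a hypothetical helper, not part of the app; the field names match what our backend reads:

```python
import json

def build_chat_payload(detail_result, history, message):
    # Convert the /detail result's example_chats into the [user, bot] pair
    # format that the chat endpoint expects for its "examples" field
    examples = [[chat["user"], chat["bot"]] for chat in detail_result["example_chats"]]
    return {
        "character": detail_result["name"],
        "style": detail_result["char_style"],
        "examples": examples,
        "history": history,
        "message": message,
    }

detail_result = {
    "name": "Yoda",
    "char_style": 'Speaks in a Yoda-esque manner, using inverted "subject-verb-object" word order.',
    "example_chats": [{"user": "I'm leaving.", "bot": "You can't escape me."}],
}

payload = build_chat_payload(detail_result, history=[], message="Hey there")
print(json.dumps(payload))
```

This mirrors what our front-end will do in JavaScript before POSTing to /chat.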

When you're ready, let's test this endpoint with the "Send" button.

the response of our /chat endpoint

Sweet! The backend returned the chat response. However, at this point, the response doesn't quite sound like Yoda, does it? This is because the chat variant of the PaLM2 model heavily favors the provided example chats over its own training data. This is actually good news, as it means we can supply more example data to steer the output further toward the character.

To make it easier to generate example chats, let's proceed to develop our front-end app. We'll delve deeper into front-end development in the next section.

Building the Front-End for the Chatbot App

We're going to build the front-end of our application using React. Let's start with the App.tsx file.


First, let's import the necessary modules and components:

import React, { useEffect, useState } from 'react';
import ChatHistory, { ChatItem } from './components/ChatHistory';
import CharacterInput from './components/CharacterInput';
import DialogueContainer from './components/DialogueContainer';
import SendMessage from './components/SendMessage';
import Collapsible from 'react-collapsible';

Next, we define a helper function to get the current timestamp:

function getCurrentTimestamp() {
  const now = new Date();
  let hours = now.getHours();
  const minutes = now.getMinutes();
  const ampm = hours >= 12 ? 'PM' : 'AM';
  hours %= 12;
  hours = hours || 12; // Convert to 12-hour format
  const formattedHours = hours.toString().padStart(2, '0');
  const formattedMinutes = minutes.toString().padStart(2, '0');

  return `${formattedHours}:${formattedMinutes} ${ampm}`;
}

We also define some interfaces to help with type checking:

interface Dialogue {
  user: string;
  bot: string;
}

interface CharacterDetails {
  name: string;
  bio: string;
  char_style: string;
  example_chats: Dialogue[];
}

Then, we define our main App component:

function App() {
  const [character, setCharacter] = useState('');
  const [dialogues, setDialogues] = useState<ChatItem[]>([]);
  const [isLoading, setIsLoading] = useState(false)
  // Add a new state variable for the example dialogues
  const [characterDetails, setCharacterDetails] = useState<CharacterDetails | null>(null);
  const [characterSubmitted, setCharacterSubmitted] = useState(false);

We define a function to handle starting a new session:

const handleNewSession = () => {
    // Reset the state so the user can start over with a new character
    setCharacter('');
    setCharacterDetails(null);
    setCharacterSubmitted(false);
    setDialogues([]);
  };

We use the useEffect hook to fetch character details when the character state changes:

useEffect(() => {
    const fetchCharacterDetails = async () => {
      setIsLoading(true);
      const response = await fetch('/detail', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          character: character
        })
      });

      if (response.ok) {
        const data = await response.json();
        setCharacterDetails(data.result);
        setCharacterSubmitted(true);
      }
      setIsLoading(false);
    };

    if (character) {
      fetchCharacterDetails();
    }
  }, [character]);

We define a function to handle sending a new message:

const handleNewMessage = async (message: string) => {
    // Add the user's message to the dialogues immediately
    setDialogues(prevDialogues => [...prevDialogues, { sender: 'user', message, timestamp: getCurrentTimestamp() }]);
    setIsLoading(true);

    // Prepare the request body
    if (characterDetails) {
      const exampleChatsFormatted = => [chat.user,]);
      const requestBody = {
        style: characterDetails.char_style,
        examples: exampleChatsFormatted,
        history: => dialogue.message),
        message: message
      };

      // Send a POST request to the /chat endpoint
      const response = await fetch('/chat', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json'
        },
        body: JSON.stringify(requestBody)
      });

      // Parse the response
      const responseData = await response.json();

      // Use the response data to update the dialogues
      setDialogues(prevDialogues => [...prevDialogues, { sender: 'bot', message: responseData.result, timestamp: getCurrentTimestamp() }]);
    }
    setIsLoading(false);
  };

Finally, we render our components:

return (
    <div id="app" className="h-screen bg-gray-200 flex flex-col items-center justify-center p-4">
      <h1 className="text-4xl mb-6 text-center font-bold text-blue-500">CharBot</h1>
      <div className="bg-white p-8 max-h-[calc(100vh-8rem)] rounded-xl shadow-lg w-full max-w-4xl space-y-4">
        {!characterSubmitted && (
          <CharacterInput setCharacter={setCharacter} isLoading={isLoading} disabled={characterSubmitted} />
        )}
        {characterSubmitted && (
          <button onClick={handleNewSession} className="px-4 py-2 bg-blue-500 text-white font-bold rounded-md hover:bg-blue-600">New Session</button>
        )}
        {characterDetails && (
          <div>
            <p><strong>Name:</strong> {character}</p>
            <p><strong>Bio:</strong> {}</p>
            <p><strong>Character Style:</strong> {characterDetails.char_style}</p>
            <Collapsible trigger="Example Dialogues">
              {, index) => (
                <div key={index}>
                  <p><strong>User:</strong> {dialogue.user}</p>
                  <p><strong>Bot:</strong> {}</p>
                </div>
              ))}
            </Collapsible>
          </div>
        )}
        <ChatHistory chatItems={dialogues} isLoading={isLoading} />
        <SendMessage handleNewMessage={handleNewMessage} />
      </div>
    </div>
  );
}

export default App;

This is a good start! Next, let's build each of the components that App renders, one by one.


Building the CharacterInput Component

This component is responsible for allowing the user to input the character they want the chatbot to emulate. It's a simple form with an input field and a submit button.

First, we import the necessary modules and define the props for our component:

import React, { ChangeEvent, FormEvent, useState } from 'react';

interface CharacterInputProps {
  setCharacter: (value: string) => void;
  disabled: boolean;
  isLoading: boolean;
}

We define our CharacterInput component and use the useState hook to manage the state of the input field:

const CharacterInput: React.FC<CharacterInputProps> = ({ setCharacter, disabled, isLoading }) => {
  const [inputValue, setInputValue] = useState('');

We define a handleSubmit function that will be called when the form is submitted. This function prevents the default form submission behavior and calls the setCharacter function passed as a prop:

const handleSubmit = (e: FormEvent) => {
    e.preventDefault();
    if (!disabled) {
      setCharacter(inputValue);
    }
  };

Finally, we return the JSX for our component. This includes a form with an input field and a submit button. The input field's value is tied to our inputValue state, and its onChange handler updates inputValue whenever the user types into the field. The submit button is disabled if the disabled prop is true:

return (
    <form onSubmit={handleSubmit} className="flex items-center space-x-4">
      <label htmlFor="character" className="font-bold">Set chatbot character:</label>
      <input
        id="character"
        type="text"
        value={inputValue}
        onChange={(e: ChangeEvent<HTMLInputElement>) => setInputValue(}
        className="w-full px-4 py-2 border border-gray-300 rounded-md"
        disabled={disabled}
      />
      {isLoading && (
        <svg className="animate-spin h-5 w-5 mr-3 ..." viewBox="0 0 24 24">
          {/* spinner paths omitted */}
        </svg>
      )}
      <button type="submit" className="px-4 py-2 bg-blue-500 text-white font-bold rounded-md hover:bg-blue-600" disabled={disabled}>Set</button>
    </form>
  );
};

export default CharacterInput;


Building the ChatHistory Component

This component is responsible for displaying the history of the chat. It takes an array of chat items as a prop and maps over them to create a list of chat messages.

First, we import the necessary modules and define the props for our component:

import React from 'react';

export interface ChatItem {
  sender: 'user' | 'bot';
  message: string;
  timestamp: string;
}

interface ChatHistoryProps {
  chatItems: ChatItem[];
  isLoading: boolean;
}

We define our ChatHistory component:

const ChatHistory: React.FC<ChatHistoryProps> = ({ chatItems, isLoading }) => {

We return the JSX for our component. This includes a div that contains a list of chat messages. Each chat message is a div that contains the message text and the timestamp. The sender property of each chat item is used to conditionally apply CSS classes to each chat message:

return (
    <div className={`p-4 space-y-4 h-96 overflow-auto bg-gray-100 shadow-inner ${chatItems.length === 0 ? 'h-20' : ''}`}>
      {, index) => (
        <div key={index} className={`flex items-start ${item.sender === 'user' ? 'justify-end' : ''}`}>
          <div className={`rounded-lg px-4 py-2 ${item.sender === 'user' ? 'bg-blue-500 text-white' : 'bg-gray-300 text-gray-800'}`}>
            <div>{item.message}</div>
            <div className="text-right text-xs mt-1">{item.timestamp}</div>
          </div>
        </div>
      ))}
      {isLoading && (
        <div className="flex items-start">
          <div className="rounded-lg px-4 py-2 bg-gray-300 text-gray-800">
            {/* typing indicator shown while the bot's response is pending */}
          </div>
        </div>
      )}
    </div>
  );
};

export default ChatHistory;


Building the DialogueContainer Component

This component is responsible for managing and displaying a list of dialogue items. It provides functionality to add new dialogue items and to update existing ones.

First, we import the necessary modules and define the props for our component:

import React, { useState } from 'react';
import DialogueItem from './DialogueItem';

interface DialogueContainerProps {
  dialogues: { user: string; bot: string }[];
  setDialogues: React.Dispatch<React.SetStateAction<{ user: string; bot: string }[]>>;
}

We define our DialogueContainer component:

const DialogueContainer: React.FC<DialogueContainerProps> = ({ dialogues, setDialogues }) => {

We define a state variable to keep track of whether the dialogue container is collapsed or expanded:

const [isCollapsed, setIsCollapsed] = useState<boolean>(true);

We define a function to add a new row to the dialogue container:

const handleAddRow = () => {
    setDialogues([...dialogues, { user: '', bot: '' }]);
  };

We define a function to handle input changes in the dialogue items:

const handleInputChange = (index: number, field: keyof { user: string; bot: string }, value: string) => {
    const updatedDialogues = [...dialogues];
    updatedDialogues[index][field] = value;
    setDialogues(updatedDialogues);
  };

We define a function to toggle the collapse state of the dialogue container:

const toggleCollapse = () => {
    setIsCollapsed(!isCollapsed);
  };

Finally, we return the JSX for our component. This includes a button to toggle the collapse state of the dialogue container and a list of DialogueItem components:

return (
    <div className="my-8">
      <h2 className="text-xl font-semibold mb-4">Dialogue Container</h2>
      <button onClick={toggleCollapse} className="px-4 py-2 bg-blue-500 text-white font-bold rounded-md hover:bg-blue-600 mb-2">
        {isCollapsed ? 'Expand' : 'Collapse'}
      </button>
      {!isCollapsed && (
        <div>
          {, index) => (
            <DialogueItem key={index} item={item} onChange={handleInputChange} index={index} />
          ))}
          <button onClick={handleAddRow} className="px-4 py-2 mt-4 bg-green-500 text-white font-bold rounded-md hover:bg-green-600">
            Add Row
          </button>
        </div>
      )}
    </div>
  );
};

export default DialogueContainer;


Building the DialogueItem Component

This component is responsible for displaying a single dialogue item and handling changes to its fields.

First, we import the necessary modules and define the props for our component:

import React, { ChangeEvent } from 'react';

interface DialogueItemProps {
  item: { user: string; bot: string };
  onChange: (index: number, field: keyof { user: string; bot: string }, value: string) => void;
  index: number;
}

We define our DialogueItem component:

const DialogueItem: React.FC<DialogueItemProps> = ({ item, onChange, index }) => {

We define a function to handle input changes in the dialogue item:

const handleInputChange = (field: keyof { user: string; bot: string }, e: ChangeEvent<HTMLInputElement>) => {
    onChange(index, field,;
  };

Finally, we return the JSX for our component. This includes two input fields for the user message and the bot reply:

return (
    <div className="grid grid-cols-2 gap-4 mb-4">
      <input
        type="text"
        value={item.user}
        placeholder="User message"
        onChange={(e) => handleInputChange('user', e)}
        className="px-4 py-2 border border-gray-300 rounded-md"
      />
      <input
        type="text"
        value={}
        placeholder="Bot reply"
        onChange={(e) => handleInputChange('bot', e)}
        className="px-4 py-2 border border-gray-300 rounded-md"
      />
    </div>
  );
};

export default DialogueItem;


Building the SendMessage Component

This component is responsible for displaying the input field for the user's message and handling the submission of the form.

First, we import the necessary modules and define the props for our component:

import React, { ChangeEvent, FormEvent, useState } from 'react';

interface SendMessageProps {
  handleNewMessage: (message: string) => void;
}

We define our SendMessage component:

const SendMessage: React.FC<SendMessageProps> = ({ handleNewMessage }) => {

We use the useState hook to create a state variable for the input value:

const [inputValue, setInputValue] = useState('');

We define a function to handle the submission of the form:

const handleSubmit = (e: FormEvent) => {
    e.preventDefault();
    handleNewMessage(inputValue);
    setInputValue('');
  };

Finally, we return the JSX for our component. This includes a form with an input field for the user's message and a submit button:

return (
    <form onSubmit={handleSubmit} className="flex items-center space-x-4 mt-auto">
      <label htmlFor="message" className="font-bold">Type your message:</label>
      <input
        id="message"
        type="text"
        value={inputValue}
        onChange={(e: ChangeEvent<HTMLInputElement>) => setInputValue(}
        className="w-full px-4 py-2 border border-gray-300 rounded-md"
      />
      <button type="submit" className="px-4 py-2 bg-blue-500 text-white font-bold rounded-md hover:bg-blue-600">Send</button>
    </form>
  );
};

export default SendMessage;


To connect the front-end to our running backend, we need to configure a proxy to it. Let's edit our package.json file and add a "proxy" field:

  "name": "palm2-charbot",
  "version": "0.1.0",
  "private": true,
++  "proxy": "http://localhost:5000",
  "dependencies": {

That's it! We've built a front-end for our chatbot app using React. The front-end sends requests to the back-end, which uses the PaLM2 API to generate responses. The responses are then displayed in the chat history. Users can start a new session by clicking the "New Session" button.

Testing the Conversation with Yoda Chatbot

Let's test our character-based chatbot, or CharBot. First, ensure your current working directory is inside our front-end project. Then, run this command:

npm start

After a moment, the terminal will display an output indicating the app has successfully compiled and is ready to run. It will also automatically open the app's URL, localhost:3000, in your default browser.

The terminal output, indicating the app has successfully built and is ready to run

Congratulations! Now, let's explore the user interface for our character chatbot app. It includes an input field for the character name, a display area for the chat conversation history, and an input field for the message.

The user interface for our CharBot app

Let's input the name of our character! For this tutorial, we'll instill the character of Yoda from Star Wars into our chatbot. Type "Yoda" in the text input next to the "Set chatbot character:" label. When you're done, click the "Set" button.

Type in the name of Yoda in the input text, the bot should indicate loading

After a moment, the character description for Yoda will appear. Great! In this demonstration, we see how we can directly incorporate the output from an AI model into our production app, thanks to our purposefully designed prompts. We can also view the example dialogues by clicking on the "example dialogues" label.

The character descriptions of Yoda, complete with the example dialogues

Finally, let's try chatting with the CharBot! Let's type in our greeting "Greetings, o master Yoda".

We send our greetings to our charbot

After a moment, the response from our chatbot will be returned. In my case, it responded by saying "Greetings, young one. What brings you to my humble abode?".

the response from the chatbot

Awesome! Even though the bot doesn't sound exactly like Yoda for now, we can later fine-tune the bot with more training data that better represents Yoda's personality. With the PaLM2 model, we're only a few training data points away from giving the bot the character according to our heart's desire.


Conclusion

Congratulations! Throughout this tutorial, we've learned how to build an AI-powered app from the ground up. We started by composing our prompts, testing the accuracy and consistency of the results, and then incorporated the prompts into our application backend. The PaLM2 model proved to be a sophisticated AI tool that can be directed toward specific purposes, such as text completion or conversational bots.

Using the text generation endpoint, we were able to influence the structure of the bot's response by specifying in our prompts that we needed the response to be arranged and surrounded by XML-like tags. The AI model's response can then be parsed and processed further, and returned to the client in a format familiar to front-end technologies, namely JSON.

In our front-end, we received data on character details. These details, in addition to being displayed in the UI, were also used as payload for our subsequent requests to the /chat endpoint. This allowed us to provide all the necessary data, such as character name, character style, and example chats automatically, by only providing the UI with the character name.

Finally, I'd like to thank you for joining me on this journey. We've crafted prompts that influence the AI model to return responses in the format we specified, opening up more possibilities for building exciting AI-powered apps. You can find the finished projects for the front-end and the backend on my GitHub. See you in the next tutorial!
