Anthropic's Claude and LangChain Tutorial: Building a Search-Powered Personal Assistant App

Wednesday, June 28, 2023 by septian_adi_nugraha408

Introduction

Introducing Anthropic's Claude

Anthropic is a research organization focused on developing advanced AI systems. Their latest creation, Claude, is a next-generation AI assistant designed to be helpful, honest, and harmless. This cutting-edge model ensures a high degree of reliability and predictability in various tasks.

Key features of Claude include:

  • Versatile conversational and text processing capabilities
  • Maintaining user safety and privacy as its top priority

Claude's primary use cases are:

  • Summarization
  • Search
  • Creative and collaborative writing
  • Q&A
  • Coding assistance

These features make Claude an ideal AI tool for a breadth of applications, empowering users across different domains.

Introduction to LangChain

LangChain is a versatile tool for building end-to-end language model applications. It provides a robust framework that simplifies the process of creating, managing, and deploying applications powered by large language models (LLMs). LLMs are advanced AI models designed to understand, generate, and manipulate human-like text across a wide range of languages and tasks.

Key features of LangChain include:

  • Efficient management of prompts for LLMs
  • The ability to create chains of tasks for complex workflows
  • Adding state to the AI, allowing it to remember information from previous interactions

These capabilities make LangChain a powerful and user-friendly platform for harnessing the potential of language models in diverse applications.
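
To make this concrete, here is a minimal, illustrative sketch of those three features working together. It uses the same ChatAnthropic model, ConversationChain, and ConversationBufferMemory classes we'll rely on later in this tutorial, and assumes the langchain package is installed and an Anthropic API key is configured:

from langchain.chat_models import ChatAnthropic
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# Chain a chat model together with a conversation wrapper and a memory buffer
llm = ChatAnthropic()
memory = ConversationBufferMemory()
conversation = ConversationChain(llm=llm, memory=memory)

# Thanks to the memory, the second call can refer back to the first
print(conversation.predict(input="Hi! My name is Alice."))
print(conversation.predict(input="Do you remember my name?"))

Don't worry about running this just yet; we'll set up the dependencies and API keys in the sections below.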

Prerequisites

  • Basic knowledge of Python
  • Basic knowledge of TypeScript and/or React
  • Access to Anthropic's Claude API
  • Access to SerpApi's Web Search API

Outline

  1. Initializing the Project
  2. Building the Front-End for an AI Assistant App with Claude and LangChain
  3. Writing the Project Files
  4. Testing the AI Assistant App

Discussion

Initializing the Project

app.py (Flask app entrypoint) šŸ

  1. Install Flask: To start, make sure you have Flask installed in your environment. You can do this using pip:

    pip install Flask
    
  2. Create a new directory: Create a new directory for your project and navigate to it:

    mkdir claude-langchain
    cd claude-langchain
    
  3. Set up a virtual environment (optional): It's a good practice to use a virtual environment when working with Python projects. You can create one using venv or any other tool of your choice:

    python -m venv venv
    source venv/bin/activate  # Linux/Mac
    venv\Scripts\activate     # Windows
    
  4. Create an app.py file: Create an app.py file to write your Flask application code:

    touch app.py  # Linux/Mac
    echo.>app.py  # Windows
    
  5. Write your Flask application code: Open the app.py file in your favorite code editor and add the following code:

    from flask import Flask
    
    app = Flask(__name__)
    
    @app.route('/')
    def hello_world():
        return 'Hello, World!'
    
    if __name__ == '__main__':
        app.run()
    
  6. Run the Flask application: Save the app.py file and run the following command in your terminal/command prompt:

    python app.py
    
  7. Open your browser: Open your preferred web browser and navigate to http://127.0.0.1:5000/. You should see "Hello, World!" displayed on the webpage.

And that's it! You've successfully initialized a Flask project and created a simple application.

.env šŸŒ

  1. Install python-dotenv and langchain: To easily manage environment variables with a .env file, we'll use the python-dotenv package. At the same time, let's also install langchain. Install both packages using pip:

    pip install python-dotenv langchain
    
  2. Create a .env file: Create a .env file in the root directory of your project:

    touch .env  # Linux/Mac
    echo.>.env  # Windows
    
  3. Add environment variables: Open the .env file in your favorite code editor and add your environment variables. Each variable should be on a new line, in the format KEY=VALUE:

    ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxxx
    SERPAPI_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    Please note that you need both API keys: one for Anthropic's Claude model and one for SerpAPI's web search service.

  4. Load environment variables: Modify your app.py file to load the environment variables from the .env file using python-dotenv. Update your app.py as follows:

    import os
    from flask import Flask
    from dotenv import load_dotenv
    
    load_dotenv()
    
    app = Flask(__name__)
    
    @app.route('/')
    def hello_world():
        return 'Hello, World!'
    
    if __name__ == '__main__':
        app.run()
    
  5. Ignore the .env file in version control: It's important not to share sensitive information like secret keys in version control. If you're using Git, add the following line to your .gitignore file (create one if you don't have it):

    .env

Now, your Flask project is set up to use environment variables from the .env file. You can add more variables as needed and access them using os.environ.get('KEY'). Remember to keep the .env file private and never commit it to version control.
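
For instance, here's a quick sketch of reading those variables back at runtime (the variable names match the ones we defined above; the error check is just an illustration):

import os
from dotenv import load_dotenv

load_dotenv()  # loads KEY=VALUE pairs from .env into the process environment

# os.environ.get() returns None for a missing variable, so we can fail early
anthropic_key = os.environ.get('ANTHROPIC_API_KEY')
if anthropic_key is None:
    raise RuntimeError('ANTHROPIC_API_KEY is not set; check your .env file')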

Building the Front-End for an AI Assistant App with Claude and LangChain

This tutorial is designed for intermediate users who have a basic understanding of Node.js, npm, React, and TypeScript. We'll be using a stack that includes these technologies, along with Tailwind CSS for styling. This stack was chosen for its robustness, versatility, and the strong community support behind these technologies. In addition, we'll integrate Anthropic's Claude model and LangChain, two powerful AI technologies that will allow our app to generate accurate, human-like text responses.

Installing Node.js and NPM

  1. Download the Node.js installer for your OS from the official site.
  2. Follow the installation prompts to install Node.js and npm. The LTS (Long Term Support) version is recommended for most users.
  3. Once installed, verify the installation by checking the versions of Node.js and npm from your terminal:
node -v
npm -v

Setting Up the Project Environment

Installing Create React App

Create React App (CRA) is a command-line utility that assists us in creating a new React.js application. We'll install it globally via npm:

npm install -g create-react-app
Creating a New React Project with TypeScript

We'll use CRA with the TypeScript template to create a new project named ai-assistant-claude.

npx create-react-app ai-assistant-claude --template typescript

This command creates a new directory called ai-assistant-claude in our current directory, housing a new React application with TypeScript support.

Integrating TailwindCSS

Installing TailwindCSS

The steps followed in this tutorial are based on the official Tailwind CSS documentation. Refer to these docs for more up-to-date instructions.

We'll start by installing TailwindCSS and initializing the library in our project:

npm install -D tailwindcss
npx tailwindcss init
Configuring Template Paths

Next, we configure our template paths by adding them to the tailwind.config.js file. The -- marks the line you'll remove, and ++ marks the lines you'll add:

/** @type {import('tailwindcss').Config} */
module.exports = {
--    content: [],
++  content: [
++    "./src/**/*.{js,jsx,ts,tsx}",
++  ],
  theme: {
    extend: {},
  },
  plugins: [],
}
Adding Tailwind Directives

Lastly, we'll add the @tailwind directives for each of Tailwind's layers to our ./src/index.css file:

@tailwind base;
@tailwind components;
@tailwind utilities;

And voila! Tailwind CSS is now integrated into our project.

Installing Required Libraries

Before we proceed to the coding section, let's finalize our preparations by installing the necessary libraries: FontAwesome for icons and react-markdown for rendering Markdown content.

Installing FontAwesome for Icons
npm i --save @fortawesome/fontawesome-svg-core
npm install --save @fortawesome/free-solid-svg-icons
npm install --save @fortawesome/react-fontawesome
Installing React Markdown for Rendering Markdown Content
npm install --save react-markdown

With these steps completed, your project is set up with all the necessary tools and libraries. Next, we'll start building the AI assistant app that uses Anthropic's Claude API and LangChain for generating accurate and human-like text responses.

Troubleshooting

If you run into any issues during installation or setup, consult the documentation of the respective libraries, or post your issues on StackOverflow or the relevant GitHub repositories.

Writing the Project Files

In this section, we'll go back to the Flask app we initialized earlier and add new endpoints, such as /ask and /search. These will serve as the endpoints for our simple chat and advanced chat features (the latter being powered by Google search results).

Let's start with importing our necessary modules:

from flask import Flask, jsonify, request
from dotenv import load_dotenv
from langchain.chat_models import ChatAnthropic
from langchain.chains import ConversationChain
from langchain.agents import Tool
from langchain.agents import AgentType
from langchain.utilities import SerpAPIWrapper
from langchain.agents import initialize_agent
from langchain.memory import ConversationBufferMemory
from langchain.prompts.chat import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

load_dotenv()

app = Flask(__name__)

The above section imports all the necessary packages and initializes our Flask application.

Developing the Backend

Creating the Basic Endpoints

We will start by creating a basic endpoint (/) to test if our server is running correctly:

@app.route('/')
def hello_world():
    return 'Hello, World!'

By visiting the root URL, we should receive the response "Hello, World!", indicating that our server is running as expected.
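
For instance, with the server running (Flask defaults to port 5000), a quick check from another terminal should print the greeting:

curl http://127.0.0.1:5000/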

Creating the /ask Endpoint

This endpoint processes messages for the basic chat feature of our application. It reads JSON data from the request, extracts the messages, and generates a response using Anthropic's Claude model and LangChain.

@app.route('/ask', methods=['POST'])
def ask_assistant():
    # The code for /ask endpoint goes here

Extracting Messages from the Request

First, we need to check whether any data is provided and extract the messages from it.

data = request.get_json()
if not data:
    return jsonify({"error": "No data provided"}), 400

messages = data.get("message")

Generating the Response

The following code segment generates the chat response using LangChain's ChatAnthropic() model and a ChatPromptTemplate to structure the conversation. The conversation history is stored using ConversationBufferMemory.

llm = ChatAnthropic()

input = ""
message_list = []
for message in messages:
    if message['role'] == 'user':
        message_list.append(
            HumanMessagePromptTemplate.from_template(message['content'])
        )
        input = message['content']
    elif message['role'] == 'assistant':
        message_list.append(
            AIMessagePromptTemplate.from_template(message['content'])
        )

# Adding SystemMessagePromptTemplate at the beginning of the message_list
message_list.insert(0, SystemMessagePromptTemplate.from_template(
    "The following is a friendly conversation between a human and an AI. The AI is talkative and "
    "provides lots of specific details from its context. The AI will respond with plain string, replace new lines with \\n which can be easily parsed and stored into JSON, and will try to keep the responses condensed, in as few lines as possible."
))

message_list.insert(1, MessagesPlaceholder(variable_name="history"))

message_list.insert(-1, HumanMessagePromptTemplate.from_template("{input}"))

prompt = ChatPromptTemplate.from_messages(message_list)

memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
result = conversation.predict(input=input)

Sending the Response

After generating the response, we return it as a JSON object (the system prompt already instructs the model to escape newlines as \n, keeping the result JSON-friendly).

print(result)
return jsonify({"status": "success", "message": result})
Creating the /search Endpoint

The /search endpoint is similar to /ask but includes search functionality to provide more detailed responses. We use the SerpAPIWrapper to add this feature.

@app.route('/search', methods=['POST'])
def search_with_assistant():
    data = request.get_json()
    if not data:
        return jsonify({"error": "No data provided"}), 400

    messages = data.get("message")

    llm = ChatAnthropic()


    # Get the last message with 'user' role
    user_messages = [msg for msg in messages if msg['role'] == 'user']
    last_user_message = user_messages[-1] if user_messages else None

    # If there is no user message, return an error response
    if not last_user_message:
        return jsonify({"error": "No user message found"}), 400

    input = last_user_message['content']

    search = SerpAPIWrapper()
    tools = [
        Tool(
            name = "Current Search",
            func=search.run,
            description="useful for when you need to answer questions about current events or the current state of the world"
        ),
    ]
    chat_history = MessagesPlaceholder(variable_name="chat_history")
    memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
    agent_chain = initialize_agent(
        tools, 
        llm, 
        agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, 
        verbose=True, 
        memory=memory, 
        agent_kwargs = {
            "memory_prompts": [chat_history],
            "input_variables": ["input", "agent_scratchpad", "chat_history"]
        }
    )
    result = agent_chain.run(input=input)

    print(result)
    return jsonify({"status": "success", "message": result})

Running the Flask App

Finally, we add the standard boilerplate to run our Flask app.

if __name__ == '__main__':
    app.run()
Testing the Backend

If everything went well, here's our final backend code.

from flask import Flask, jsonify, request
from dotenv import load_dotenv
from langchain.chat_models import ChatAnthropic
from langchain.chains import ConversationChain
from langchain.agents import Tool
from langchain.agents import AgentType
from langchain.utilities import SerpAPIWrapper
from langchain.agents import initialize_agent
from langchain.memory import ConversationBufferMemory
from langchain.prompts.chat import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

load_dotenv()

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

@app.route('/ask', methods=['POST'])
def ask_assistant():
    data = request.get_json()
    if not data:
        return jsonify({"error": "No data provided"}), 400

    messages = data.get("message")
    llm = ChatAnthropic()

    input = ""
    message_list = []
    for message in messages:
        if message['role'] == 'user':
            message_list.append(
                HumanMessagePromptTemplate.from_template(message['content'])
            )
            input = message['content']
        elif message['role'] == 'assistant':
            message_list.append(
                AIMessagePromptTemplate.from_template(message['content'])
            )

    # Adding SystemMessagePromptTemplate at the beginning of the message_list
    message_list.insert(0, SystemMessagePromptTemplate.from_template(
        "The following is a friendly conversation between a human and an AI. The AI is talkative and "
        "provides lots of specific details from its context. The AI will respond with plain string, replace new lines with \\n which can be easily parsed and stored into JSON, and will try to keep the responses condensed, in as few lines as possible."
    ))

    message_list.insert(1, MessagesPlaceholder(variable_name="history"))

    message_list.insert(-1, HumanMessagePromptTemplate.from_template("{input}"))

    prompt = ChatPromptTemplate.from_messages(message_list)

    memory = ConversationBufferMemory(return_messages=True)
    conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
    result = conversation.predict(input=input)

    print(result)
    return jsonify({"status": "success", "message": result})

@app.route('/search', methods=['POST'])
def search_with_assistant():
    data = request.get_json()
    if not data:
        return jsonify({"error": "No data provided"}), 400

    messages = data.get("message")

    llm = ChatAnthropic()


    # Get the last message with 'user' role
    user_messages = [msg for msg in messages if msg['role'] == 'user']
    last_user_message = user_messages[-1] if user_messages else None

    # If there is no user message, return an error response
    if not last_user_message:
        return jsonify({"error": "No user message found"}), 400

    input = last_user_message['content']

    search = SerpAPIWrapper()
    tools = [
        Tool(
            name = "Current Search",
            func=search.run,
            description="useful for when you need to answer questions about current events or the current state of the world"
        ),
    ]
    chat_history = MessagesPlaceholder(variable_name="chat_history")
    memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
    agent_chain = initialize_agent(
        tools, 
        llm, 
        agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, 
        verbose=True, 
        memory=memory, 
        agent_kwargs = {
            "memory_prompts": [chat_history],
            "input_variables": ["input", "agent_scratchpad", "chat_history"]
        }
    )
    result = agent_chain.run(input=input)

    print(result)
    return jsonify({"status": "success", "message": result})


if __name__ == '__main__':
    app.run()

Now, let's test our app. Run the backend app with the following command, making sure the virtual environment is activated.

flask run

We'll know everything is working if our terminal shows output like this:

the output of flask run command, showing the server running on localhost and port 5000

Without further ado, let's test our two endpoints, /ask and /search. To distinguish between the two, we'll send each of them a similar payload. Open your preferred REST API testing tool, or just use cURL; in this tutorial, I'll use Insomnia.

Let's first call the /ask endpoint with the following payload, asking about the sequel to the video game "The Legend of Zelda: Breath of the Wild". It will answer incorrectly. This is to be expected, as the model's training data cuts off around the end of 2021, before the sequel's title was announced.
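
If you'd rather use cURL than a GUI client, an equivalent request looks something like this; the payload shape matches what our endpoint reads via data.get("message"), and a successful call returns a JSON object of the form {"status": "success", "message": "..."}:

curl -X POST http://127.0.0.1:5000/ask \
  -H "Content-Type: application/json" \
  -d '{"message": [{"role": "user", "content": "What is the sequel of The Legend of Zelda: Breath of the Wild?"}]}'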

the result of the /ask endpoint. The answer is incorrect: it answers with 'Breath of the Wild 2'

How about the /search endpoint? If you look at our earlier code, this endpoint is handled by a more sophisticated chain that uses an agent. An agent gives the AI more decision-making power by providing it with tools beyond its own model, which, as we've just demonstrated, has its flaws.

To demonstrate how the /search endpoint works, let's call it with the same payload as before.

the result of the /search endpoint to our question, what is the sequel of Zelda: Breath of the Wild?

This time, it worked nicely! How does it work under the hood? Let's look at the output in our terminal.

The AI agent at work: it started the 'chain' by forming a thought and considering a web search to support its answer. It then observed that the sequel to Breath of the Wild is indeed Tears of the Kingdom

Interestingly enough, this time the AI model didn't answer right away but seemed to 'ponder' what to do. It decided to answer only after observing from the web search results that "Based on the news search, it looks like a sequel called The Legend of Zelda: Tears of the Kingdom was announced".

So, by that logic, shouldn't we just use the /search endpoint, since it is more accurate and therefore smarter? Not quite. Normally, in a chatbot-based app, the bot is expected to retain the context of previous conversations so it can respond as if it 'remembers' the whole conversation. Let's see if the /search endpoint can do just that.

Let's test if the /search endpoint can recall what we just asked it.
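
For instance, a follow-up request along these lines, with the earlier exchange included in the payload (the assistant's reply is elided here for brevity):

curl -X POST http://127.0.0.1:5000/search \
  -H "Content-Type: application/json" \
  -d '{"message": [
        {"role": "user", "content": "What is the sequel of The Legend of Zelda: Breath of the Wild?"},
        {"role": "assistant", "content": "..."},
        {"role": "user", "content": "What did I just ask you about?"}
      ]}'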

the /search endpoint responded incorrectly; it seemed not to remember the context of the previous conversation

This happens because, while the LangChain library has a memory feature that can retain past conversations, the web server and the REST API built on it are inherently stateless: every request is treated as a brand-new one. In fact, our handler constructs a fresh ConversationBufferMemory on each call, so nothing carries over between requests.

But haven't we included the previous conversations in the payload? That's a good question. It turns out the agent chain used in LangChain currently doesn't support composed prompts, which consist of both the user's and the AI's past messages. (We mainly use composed prompts to provide example conversations that steer the model toward our desired responses.) The agent, on the other hand, works by receiving a single instruction and developing its chain of thought around it. That's why, no matter how long our conversation is, the agent only responds to the latest request.

Let's test our /ask endpoint to notice the difference.

the /ask endpoint responded correctly, saying that we asked it about the sequel of Zelda: Breath of the Wild

This time, it answered using our past conversation as added context! By now, we've realized that we need both endpoints to build our AI assistant app. But how do we combine the outdated but always-remembering /ask endpoint with the forgetful but thorough and up-to-date /search endpoint? By building the front-end, of course!

Developing the Frontend

App.tsx

Let's navigate back to the ai-assistant-claude project directory. In this project, we'll start modifying our React components, beginning with the entry point file, App.tsx.

import React from 'react';
import logo from './logo.svg';
import './App.css';
import { ChatClient } from './ChatClient';

function App() {
  return (
    <div>
      <ChatClient/>
    </div>
  );
}

export default App;

In the above code snippet, we import the ChatClient component, which will be created in a subsequent step. We then include the <ChatClient/> component within a <div> element. This sets up the structure for our AI assistant app using React components.

ChatClient.tsx
import React, { useState } from 'react';
import { ChatInput } from './ChatInput';
import { ChatHistory } from './ChatHistory';

export interface Message {
    content: string;
    role: string;
}

export const ChatClient: React.FC = () => {
    const [messages, setMessages] = useState<Array<Message>>([]);
    const [isLoading, setIsLoading] = useState(false)

    const handleSimpleChat = (message: string) => {
        // Send the message and past chat history to the backend
        // Update messages state with the new message
        let newMessages = [...messages, { content: message, role: 'user' }]
        setMessages(newMessages);
        let postData = {
            message: newMessages
        }
        setIsLoading(true)
        fetch('/ask', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            body: JSON.stringify(postData),
        })
            .then((response) => response.json())
            .then((data) => {
                if (data.status === "success") {
                    setMessages([...newMessages, { content: data.message, role: 'assistant' }])

                }
                setIsLoading(false)
                console.log('Success:', data);
            })
            .catch((error) => {
                console.error('Error:', error);
                setIsLoading(false)
            });

    };

    const handleAdvancedChat = (message: string) => {
        // Trigger AI agent with Google Search functionality
        // Update messages state with the new message and AI response
        let newMessages = [...messages, { content: message, role: 'user' }]
        setMessages(newMessages);
        let postData = {
            message: newMessages
        }
        setIsLoading(true)
        fetch('/search', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            body: JSON.stringify(postData),
        })
            .then((response) => response.json())
            .then((data) => {
                if (data.status === "success") {
                    setMessages([...newMessages, { content: data.message, role: 'assistant' }])

                }
                console.log('Success:', data);
                setIsLoading(false)
            })
            .catch((error) => {
                console.error('Error:', error);
                setIsLoading(false)
            });
    };

    return (
        <div className="h-screen bg-gray-100 dark:bg-gray-900 flex items-center justify-center">
            <div className='flex flex-col items-center gap-2'>
                <h1 className='text-white text-xl'>AI Assistant with Claude and LangChain</h1>
                <div className="w-full max-w-md h-[80vh] bg-white dark:bg-gray-800 rounded-lg shadow-md overflow-hidden flex flex-col">
                    <ChatHistory messages={messages} isLoading={isLoading} />
                    <ChatInput onSimpleChat={handleSimpleChat} onAdvancedChat={handleAdvancedChat} />
                </div>
            </div>
        </div>
    );
};

This component serves as the primary user interface for our AI assistant. As shown, it incorporates a ChatHistory component to display the conversation history and a ChatInput component for inputting text. The component processes input from the ChatInput component, sends requests to the backend using this input, and then displays a loading status. If the request is successfully processed, the component will also show the response received from the backend.

ChatHistory.tsx
import React from 'react';
import { ReactMarkdown } from 'react-markdown/lib/react-markdown';
import { Message } from './ChatClient';


interface ChatHistoryProps {
  messages: Array<Message>;
  isLoading: boolean
}

export const ChatHistory: React.FC<ChatHistoryProps> = ({ messages, isLoading }) => {
  return (
    <div className="p-4 h-full overflow-y-auto">
      {messages.map((message, index) => (
        <div
          key={index}
          className={`mb-3 ${
            message.role === 'user' ? 'text-right' : 'text-left'
          }`}
        >
          <ReactMarkdown
            className={`inline-block px-3 py-2 rounded-md ${
              index % 2 === 0 ? 'bg-gray-300 dark:bg-gray-700' : 'bg-blue-200 dark:bg-blue-900'
            }`}
          >
            {message.content}
          </ReactMarkdown>
        </div>
      ))}
      {isLoading && (
        <div className="mb-3 text-left">
          <ReactMarkdown className="inline-block px-3 py-2 rounded-md bg-blue-200 dark:bg-blue-900 animate-pulse">
            {/* Put your desired loading content here */}
            Thinking...
          </ReactMarkdown>
        </div>
      )}
    </div>
  );
};

Fortunately, TailwindCSS offers built-in utility classes for simple animations, such as animate-pulse. This class elegantly helps indicate that a request is awaiting a response. In this component, we also differentiate the positioning of messages from the "user" and the "assistant."

ChatInput.tsx
import React, { useState } from 'react';

interface ChatInputProps {
  onSimpleChat: (message: string) => void;
  onAdvancedChat: (message: string) => void;
}

export const ChatInput: React.FC<ChatInputProps> = ({ onSimpleChat, onAdvancedChat }) => {
  const [input, setInput] = useState('');

  const handleInputChange = (event: React.ChangeEvent<HTMLInputElement>) => {
    setInput(event.target.value);
  };

  const handleSubmit = (handler: (message: string) => void) => {
    handler(input);
    setInput('');
  };

  return (
    <div className="flex p-4 border-t border-gray-200 dark:border-gray-700">
      <input
        type="text"
        value={input}
        onChange={handleInputChange}
        placeholder="Type your message..."
        className="flex-grow px-3 py-2 rounded-md bg-gray-200 text-gray-900 dark:bg-gray-700 dark:text-gray-100 focus:outline-none"
      />
      <button
        onClick={() => handleSubmit(onSimpleChat)}
        className="ml-2 px-4 py-2 font-semibold text-gray-600 bg-white dark:text-gray-400 dark:bg-gray-800 border border-gray-300 rounded-md hover:bg-gray-200 dark:hover:bg-gray-700 focus:outline-none"
      >
        Ask
      </button>
      <button
        onClick={() => handleSubmit(onAdvancedChat)}
        className="ml-2 px-4 py-2 font-semibold text-white bg-blue-500 border border-blue-600 rounded-md hover:bg-blue-400 focus:outline-none"
      >
        Ask and Search
      </button>
    </div>
  );
};

Lastly, we added two buttons to our text input. The first button is used to send the input to the /ask endpoint, which processes the input using the AI model without any additional enhancements. This endpoint is context-aware. The second button, aptly named "Ask and Search," sends the input to the /search endpoint. In addition to processing the input via the AI model, this button also triggers an AI-driven web search if the situation requires it.

index.html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <meta name="theme-color" content="#000000" />
    <meta
      name="description"
      content="Web site created using create-react-app"
    />
    <link rel="apple-touch-icon" href="%PUBLIC_URL%/logo192.png" />
    <!--
      manifest.json provides metadata used when your web app is installed on a
      user's mobile device or desktop. See https://developers.google.com/web/fundamentals/web-app-manifest/
    -->
    <link rel="manifest" href="%PUBLIC_URL%/manifest.json" />
    <!--
      Notice the use of %PUBLIC_URL% in the tags above.
      It will be replaced with the URL of the `public` folder during the build.
      Only files inside the `public` folder can be referenced from the HTML.

      Unlike "/favicon.ico" or "favicon.ico", "%PUBLIC_URL%/favicon.ico" will
      work correctly both with client-side routing and a non-root public URL.
      Learn how to configure a non-root public URL by running `npm run build`.
    -->
--    <title>React App</title>
++    <title>Claude AI Assistant</title>
  </head>
  <body>
    <noscript>You need to enable JavaScript to run this app.</noscript>
    <div id="root"></div>
    <!--
      This HTML file is a template.
      If you open it directly in the browser, you will see an empty page.

      You can add webfonts, meta tags, or analytics to this file.
      The build step will place the bundled scripts into the <body> tag.

      To begin the development, run `npm start` or `yarn start`.
      To create a production bundle, use `npm run build` or `yarn build`.
    -->
  </body>
</html>

As a finishing touch, we update the title of our app in the index.html page by changing it from "React App" to "Claude AI Assistant."

package.json
{
  "name": "ai-assistant-claude",
  "version": "0.1.0",
  "private": true,
++  "proxy": "http://localhost:5000",
  "dependencies": {

Lastly, we add a proxy configuration to the package.json file and set it to http://localhost:5000. This helps us bypass CORS limitations arising from using different ports.

Testing the AI Assistant App

To begin testing, first ensure that the backend app (claude-langchain) is already running. Next, change the directory to the front-end app (ai-assistant-claude) and start the app using the following command:

npm start

The app may take a moment to build. Once it's ready, it will automatically open in your browser at localhost:3000.

The UI of our assistant app

Let's test both the context awareness and the search feature! First, let's ask about another video game that hadn't been announced yet as of 2021: the sequel to the latest Yakuza game by SEGA. Additionally, we'll ask whether this game features the beloved character Kazuma Kiryu from the previous games. To do this, click the "Ask" button.

The assistant app is thinking...

Give it a few seconds to think...

The AI answered incorrectly, saying that it is Yakuza: Like A Dragon which doesn't feature Kiryu

Surprisingly, the AI answered incorrectly. Yakuza: Like a Dragon is indeed the latest Yakuza game, but it stars a different protagonist, Ichiban Kasuga. Let's rephrase the question and use the "Ask and Search" button this time.

The request took longer, but was answered correctly

Now, the AI took longer to decide whether it needed to search the web. After determining that a web search was necessary and finding a satisfactory answer, it returned the information to the client/front-end.

The thought process of the AI: it decided to make a web search using the term "upcoming Yakuza games" and observed the result

This time, it returned the title "Like a Dragon Gaiden: The Man Who Erased His Name," which indeed features Kazuma Kiryu as the protagonist.

Fascinating, isn't it? The agent, powered by a Large Language Model, predicts what actions are necessary given the available resources (search) and provides a satisfactory answer. LangChain makes it easy to connect and "chain" the model, prompts (instructions to the model), and other features such as agents and tools.

Conclusion

In this tutorial, we've demonstrated the power of LangChain, particularly when combined with sophisticated language models like Anthropic's Claude. We highlighted the key features that make LangChain potent, including the ability to chain together common functionalities in AI-powered apps, such as prompt templates, models, memory, and agents. With this universal abstraction, it becomes both fast and effortless to integrate AI models from various providers, such as OpenAI's GPT and Anthropic's Claude.

Initially, we showcased how simple it is to create an AI assistant using default prompts by chaining conversation history, packaging it, and sending it to the model through our /ask endpoint. The demo then proceeded to exhibit the capabilities of AI agents in making their own decisions to accomplish specific goals. In our /search endpoint, we did precisely that ā€“ we posed questions to our assistant while providing a search tool to further explore the web for satisfactory answers.

Thank you for joining me on this journey. I hope you enjoyed building this app and learning about these technologies as much as I did writing this tutorial. For those interested in diving deeper, you can access the completed project for the front-end and backend. I look forward to seeing you in the next tutorial!
