Cohere Tutorial: Building a Simple Help Desk App for Superheroes
Introduction
Introducing the Cohere Platform
Cohere is a robust platform that provides access to state-of-the-art natural language processing models via a user-friendly API. This platform enables developers to seamlessly integrate a variety of natural language processing tasks into their applications, such as text classification, embeddings, and even text generation.
Beyond its standard offerings, Cohere also provides the ability to create custom models tailored to specific use cases. You can leverage your own training data and strategically dictate how this data should be utilized during the training process.
One of the stand-out features of Cohere is its playground – a space where you can explore and experiment with the various facets of the platform. Whether you're aiming to generate human-like text, classify text into predefined categories, or measure the semantic similarity between different pieces of text, the playground provides a conducive environment for experimentation and learning.
Cohere's capabilities make it an ideal tool for a wide array of applications. If you're building a chatbot, a content recommendation system, a text classification tool, or any application that requires understanding or generating text, Cohere can prove to be an invaluable asset.
Introduction to Chroma and Embeddings
Chroma is an open-source database specifically designed for the efficient storage and retrieval of embeddings, a crucial component in the development of AI-powered applications and services, particularly those utilizing Large Language Models (LLMs). Chroma's design is centered around simplicity and developer productivity, providing tools for storing and querying embeddings, as well as for embedding documents.
Developers can interact with Chroma through its Python client SDK, Javascript/Typescript client SDK, or a server application. The database can operate in-memory or in client/server mode, with additional support for notebook environments.
But what are embeddings?
In the realm of AI, and more specifically within machine learning and natural language processing, an 'embedding' is a representation of data in a vector space. Word embeddings, for example, represent words as high-dimensional vectors, with similar words occupying close proximity in this vector space. Embeddings are highly favored in machine learning models because they allow these models to understand the semantic content of data. In natural language processing, embeddings empower models to comprehend the meaning of words based on their context within a sentence or a document.
These embeddings are usually generated by training a model on a vast amount of data. The model learns to associate each piece of data (like a word) with a specific point in a high-dimensional space. Once the model is trained, it can generate an embedding for any given piece of data.
Chroma takes advantage of embeddings to represent documents or queries in a manner that encapsulates their semantic content. These embeddings can then be efficiently stored in the database and searched, providing a powerful tool for managing and leveraging high-dimensional data.
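To make the "close proximity" idea concrete, here's a tiny self-contained sketch using made-up 4-dimensional vectors (real embeddings have hundreds or thousands of dimensions, and would come from a model rather than being hand-written). It measures closeness with cosine similarity, one of the standard metrics for comparing embeddings:

```python
from math import sqrt

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|), ranging from -1 to 1.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": semantically similar words get nearby vectors.
hero = [0.9, 0.1, 0.3, 0.0]
superhero = [0.8, 0.2, 0.4, 0.1]
sandwich = [0.0, 0.9, 0.1, 0.7]

print(cosine_similarity(hero, superhero))  # close in meaning -> near 1.0
print(cosine_similarity(hero, sandwich))   # unrelated -> much lower
```

A vector database like Chroma essentially performs this kind of comparison at scale, returning the stored documents whose embeddings are most similar to the query's embedding.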
Prerequisites
- Basic knowledge of Python
- Access to Cohere API
- A Chroma database set up
Outline
- Initializing the Project
- Setting Up the Required Libraries
- Writing the Project Files
- Testing the Help Desk App
- Setting Up Chroma Database
- Testing the Help Desk App with ChromaDB-powered Examples
Discussion
Initializing the Project
Having covered the introductions, it's time to delve into the practical part - let's start coding! Our project will be named chroma-cohere. Open your preferred terminal, navigate to your development projects directory, and create a new directory for our project. Here's how to do it:
mkdir chroma-cohere
cd chroma-cohere
Next, we're going to create a new virtual environment specifically for this project. Creating and using virtual environments is considered a best practice in Python development: it isolates the dependencies of the current project from the global environment and from other Python projects, preventing potential conflicts. Additionally, we can later freeze the dependencies into a neat list and save it in a requirements.txt file, which other developers can use to replicate our environment.
To create a virtual environment, use the following command:
python3 -m venv env
Once the virtual environment is created, we need to activate it. The process differs depending on your operating system:
- If you're using Windows, enter the following command in your terminal:
.\env\Scripts\activate
- If you're on Linux or MacOS, use this command:
source env/bin/activate
After running the appropriate command, you should see the name of your environment (in this case, env) appear in parentheses at the start of your terminal prompt. This signifies that the virtual environment is activated and ready for use!
Well done! You've set up and activated your virtual environment, laying a solid foundation for further project development!
Setting Up the Required Libraries
In this step, we will install all the libraries required by our project. Firstly, ensure that your virtual environment is activated. Once that's done, here's a quick rundown of the libraries we'll be installing:

- cohere: We'll use the Cohere SDK to classify user input based on training (example) data.
- chromadb: We'll use ChromaDB to store expansive training data and retrieve it based on semantic similarity with the user input.
- halo: Requests to Cohere's API take a moment, and this library provides an engaging loading indicator while users wait for the response.
- python-dotenv and colorama: Two small helpers we'll use in main.py later, to load the API key from a .env file and to color the terminal output.

Let's proceed with installing these libraries. If the python command on your system still points to Python 2, use pip3 so the packages land in your Python 3 environment; if Python 3 is already your default, simply use pip.

pip3 install chromadb
pip3 install cohere
pip3 install halo
pip3 install python-dotenv colorama
Writing the Project Files
Time to get back to the code! Ensure that you are in the correct project directory and the virtual environment is active. Now, open up your preferred IDE or code editor and create a new file named main.py. As the name suggests, this will be the only Python file we'll be using throughout this tutorial.
main.py 🐍
Step 1. Import Necessary Libraries
# These are the import statements. You're importing several modules that your script will use.
import os
from dotenv import load_dotenv
import cohere
from cohere.responses.classify import Example
import pprint
from halo import Halo
from colorama import Fore, Style, init
load_dotenv() # This loads environment variables from a .env file, which is good for sensitive info like API keys
pp = pprint.PrettyPrinter(indent=4) # PrettyPrinter makes dictionary output easier to read
We start by importing the required libraries: cohere, halo, os, dotenv, colorama, and pprint. Then, we load the environment variables stored in a .env file, which is where we keep sensitive information like API keys. Lastly, we initiate a PrettyPrinter from the pprint library to make inspecting responses from the Cohere API more readable.
Step 2. Define Response Generation Function
def generate_response(messages):
    spinner = Halo(text='Loading...', spinner='dots')  # Creates a loading animation
    spinner.start()
    co = cohere.Client(os.getenv("COHERE_KEY"))  # Initializes the Cohere API client with your API key
    mood = get_mood_classification(messages, co)
    department = get_department_classification(messages, co)
    spinner.stop()  # Stops the loading animation after receiving the response

    mood_priority = {
        'Despair': 1,
        'Sorrowful': 2,
        'Frustrated': 3,
        'Anxious': 4,
        'Irritated': 5,
        'Neutral': 6,
        'Satisfied': 7,
        'Joyful': 8
    }

    # Prints the user's mood, its priority level, and the responsible department
    print(
        f"\n{Fore.CYAN}Question Received: {Fore.WHITE}{Style.BRIGHT}{messages}{Style.RESET_ALL}"
        f"\n{Fore.GREEN}Mood Detected: {Fore.YELLOW}{Style.BRIGHT}{mood}{Style.RESET_ALL}"
        f"\n{Fore.GREEN}Priority Level: {Fore.RED if mood_priority[mood] <= 2 else Fore.YELLOW if mood_priority[mood] <= 4 else Fore.CYAN}{Style.BRIGHT}{mood_priority[mood]}{Style.RESET_ALL}"
        f"\n{Fore.GREEN}Department to handle your request: {Fore.MAGENTA}{Style.BRIGHT}{department}{Style.RESET_ALL}"
    )

    return messages, mood, department
This function receives user messages as input, generates a loading animation, and initializes the Cohere API client. It classifies the user's mood and the responsible department based on these messages, then stops the loading animation. It also maps moods to priority levels and prints the user's mood, its priority level, and the responsible department in a user-friendly format.
Step 3. Define the Classification Functions (Mood and Department in Charge)
def get_department_classification(messages, co):
    department_examples = [
        Example("How do I recharge my energy beam?", "Equipment Maintenance"),
        Example("How can I manage the collateral damage from my powers?", "City Relations"),
        Example("Can you assist me with writing a speech for the mayor's ceremony?", "Public Relations"),
        Example("Is there a special way to activate my stealth mode?", "Equipment Maintenance"),
        Example("What are the current laws regarding secret identities?", "Legal"),
        Example("There's a massive tornado headed for the city, what's our plan?", "Emergency Response"),
        Example("I pulled a muscle while lifting a car, what should I do?", "Medical"),
        Example("My superhero suit is damaged, can you help me fix it?", "Equipment Maintenance"),
        Example("I need training for underwater missions, who can help me?", "Training"),
        Example("What should I say to the press about my recent rescue operation?", "Public Relations"),
        Example("How do I handle paperwork for the arrested supervillain?", "Legal"),
        Example("I'm feeling overwhelmed with this superhero life, what should I do?", "Mental Health"),
        Example("How do I track the invisible villain?", "Intelligence"),
        Example("I've been exposed to a new kind of radiation, do you have any info about this?", "Medical"),
        Example("My communication device is not working, how do I fix it?", "Equipment Maintenance"),
        Example("How can I improve my relations with the local police department?", "City Relations"),
        Example("My speed isn't improving, can you help me figure out a new training plan?", "Training"),
        Example("What's our protocol for inter-dimensional threats?", "Emergency Response"),
        Example("A civilian saw me without my mask, what should I do?", "Legal"),
        Example("How do I maintain my gear to ensure it doesn't fail during missions?", "Equipment Maintenance"),
        Example("I can't shake off the guilt after failing a mission, what should I do?", "Mental Health"),
        Example("The villain seems to know my every move, do we have a mole?", "Intelligence"),
        Example("I'm having nightmares about past battles, can someone help?", "Mental Health"),
        Example("How can we predict the villain's next move?", "Intelligence"),
        Example("I'm struggling to balance my civilian life and superhero duties, any advice?", "Mental Health")
    ]

    department_response = co.classify(
        model='large',
        inputs=[messages],
        examples=department_examples
    )  # Sends the classification request to the Cohere model
    department = department_response.classifications[0].prediction  # Extracts the prediction from the response
    return department
def get_mood_classification(messages, co):
    mood_examples = [
        Example("How do I recharge my energy beam?", "Neutral"),
        Example("How can I manage the collateral damage from my powers?", "Anxious"),
        Example("Can you assist me with writing a speech for the mayor's ceremony?", "Joyful"),
        Example("Is there a special way to activate my stealth mode?", "Neutral"),
        Example("What are the current laws regarding secret identities?", "Neutral"),
        Example("There's a massive tornado headed for the city, what's our plan?", "Anxious"),
        Example("I pulled a muscle while lifting a car, what should I do?", "Sorrowful"),
        Example("My superhero suit is damaged, can you help me fix it?", "Frustrated"),
        Example("I need training for underwater missions, who can help me?", "Satisfied"),
        Example("What should I say to the press about my recent rescue operation?", "Joyful"),
        Example("How do I handle paperwork for the arrested supervillain?", "Irritated"),
        Example("I'm feeling overwhelmed with this superhero life, what should I do?", "Despair"),
        Example("How do I track the invisible villain?", "Frustrated"),
        Example("I've been exposed to a new kind of radiation, do you have any info about this?", "Sorrowful"),
        Example("My communication device is not working, how do I fix it?", "Irritated"),
        Example("How can I improve my relations with the local police department?", "Satisfied"),
        Example("My speed isn't improving, can you help me figure out a new training plan?", "Joyful"),
        Example("What's our protocol for inter-dimensional threats?", "Despair"),
        Example("A civilian saw me without my mask, what should I do?", "Anxious"),
        Example("How do I maintain my gear to ensure it doesn't fail during missions?", "Neutral"),
        Example("I can't shake off the guilt after failing a mission, what should I do?", "Sorrowful"),
        Example("The villain seems to know my every move, do we have a mole?", "Frustrated"),
        Example("I'm having nightmares about past battles, can someone help?", "Despair"),
        Example("How can we predict the villain's next move?", "Anxious"),
        Example("I'm struggling to balance my civilian life and superhero duties, any advice?", "Satisfied")
    ]

    mood_response = co.classify(
        model='large',
        inputs=[messages],
        examples=mood_examples
    )  # Sends the classification request to the Cohere model
    mood = mood_response.classifications[0].prediction  # Extracts the prediction from the response
    return mood
The get_department_classification and get_mood_classification functions both classify user messages into categories - either department or mood. Each function builds a list of Example objects, which is sent along with the user's message in a classification request to the Cohere model. The model returns a prediction based on the inputs and examples provided, which is then extracted from the response and returned by the function.
Step 4. Define the Project's Entrypoint
def main():
    while True:  # This infinite loop asks for user input and generates responses until the user types 'quit'.
        input_text = input("You: ")
        if input_text.lower() == "quit":
            print("Goodbye!")
            break  # This breaks the infinite loop, ending the script.
        response = generate_response(input_text)


if __name__ == "__main__":  # Ensures that the main function only runs if this script is the main entry point. If this script is imported by another script, the main function won't automatically run.
    main()
In the main function, an infinite loop is initiated, which asks for user input and then generates a response using the generate_response function. The loop continues until the user enters 'quit', at which point the loop breaks and the script ends.

The if __name__ == "__main__" statement checks whether this script is being run directly or imported. If it's being run directly, the main function is called and executed.
.env 🌏
In our main.py file, we included a call to load_dotenv(). This function loads environment variables that we store in a .env file. This is the file where these variables are saved:
COHERE_KEY=xxxxxxxxxxxxxx
COHERE_MODEL_NAME=large
In this file, we store the API key (COHERE_KEY) we use to make requests to the Cohere API, as well as the name (COHERE_MODEL_NAME) of the model we use for our classification and embedding purposes.

Remember, it's important not to share the .env file, as it contains sensitive information such as your API key. If you're using Git for version control, add this file to your .gitignore to ensure it isn't uploaded to your repository.
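On a Unix-like shell, ignoring the file is a one-liner (Windows users can simply add the line to .gitignore in an editor):

```shell
# Append .env to .gitignore (creates .gitignore if it doesn't exist yet)
echo ".env" >> .gitignore
```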
requirements.txt 📄
Finally, while this step might seem optional, it's good practice to create a requirements.txt file as early as possible, especially during the initial phase of your project's development. This is crucial when you push your project to remote repositories such as GitHub, where others might discover and want to use or contribute to your project. The requirements.txt file ensures that they can effortlessly replicate your development environment!

To create the requirements.txt file, first make sure your current working directory is inside your project and that your virtual environment is activated. Then, run this command:
# "freeze" the dependencies and store the list into a file
pip freeze > requirements.txt
With this command, we effectively list all the Python dependencies required for your project in a single file. Now, anyone who wishes to install and run your project can simply install all the required dependencies by running:
pip install -r requirements.txt
This ensures that they have all the necessary Python packages installed. Consequently, it's a good practice to include this installation step in the README.md file of your project repository. This makes it easy for users to set up their environment correctly.
Testing the Help Desk App
It's time to put our Superhero Help Desk app to the test! You may have inferred the context of our help desk app from the department_examples and mood_examples. This app is designed to field inquiries from superheroes across the nation, classify the mood associated with each inquiry, and allocate it to the appropriate department for further handling. So, without further ado, let's launch our app with the following command:
python3 main.py
If everything is set up correctly, the terminal should present the "You:" input prompt.
Now, it's your turn, superhero! Let's assume you're a superhero struggling with balancing your two identities. You could pose this question to the help desk: "I can't seem to separate my superhero persona from my real life! What should I do?" After typing your question, hit Enter.
Assuming that the COHERE_KEY is properly configured in the .env file, a loading animation should briefly appear.
Voila! The help desk has successfully detected our mood as "Despair" and directed our query to the Mental Health department. It turns out that even heroes need a helping hand from time to time!
Feel free to experiment with more queries! Here are a few examples I tried:
- "I accidentally hit a civilian during a mission, what should I do?"
- "I've seen some staff members that I don't recognize, should we worry about potential informants?"
- "My double jump is not as powerful as it used to be, what should I do about this?"
Interestingly, while the help desk mostly succeeds in detecting the mood of the queries, it assigned the last question about double jumping to the Equipment Maintenance department. Perhaps it made an assumption that the double jump is facilitated by some kind of equipment rather than being a physical skill?
This apparent misclassification likely arises from the limited number of training examples provided. This is to be expected, as I intentionally restricted the training data for illustrative purposes. However, a well-functioning help desk should be supported by a substantial number of example queries and should also have the ability to learn from previous responses. In this repository, I've already prepared some example data in CSV files. But how do we persistently store and retrieve new examples? This is where an embedding database like Chroma comes into play!
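The exact layout of those CSV files isn't shown here, but based on how they are loaded later (item['text'] and item['label'] lookups on the rows), they presumably have a text column for the query and a label column for its classification. Here's a quick stdlib sketch of that assumed format (the rows are illustrative, and no pandas is needed for this demonstration):

```python
import csv
from io import StringIO

# Hypothetical excerpt of training_data_department.csv: the 'text' and 'label'
# column names are an assumption inferred from the loading code shown later.
sample = '''text,label
How do I recharge my energy beam?,Equipment Maintenance
"A civilian saw me without my mask, what should I do?",Legal
'''

# csv.DictReader yields one dict per row, keyed by the header columns --
# the same shape that df.to_dict('records') produces with pandas.
rows = list(csv.DictReader(StringIO(sample)))
for row in rows:
    print(row['label'], '<-', row['text'])
```

Note the quoting on the second row: any query containing a comma must be wrapped in double quotes so the CSV parser treats it as a single field.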
Setting Up Chroma Database
To solve our problem, we will be using ChromaDB! We already installed chromadb at the beginning of this tutorial, so let's begin by adding the imports necessary to get our embedding storing and retrieving business going!
from halo import Halo
from colorama import Fore, Style, init
++ import chromadb
++ from chromadb.config import Settings
++ from chromadb.utils import embedding_functions
++ import pandas as pd
These imports bring in all the required libraries: chromadb along with its config and embedding functions, plus pandas, which we will use to process the training data in our CSV files. The files in question are available in my GitHub repository.
Afterwards, still at the module level, we initialize the database client and the embedding function. As we will use the Chroma collection in several of our functions, we initialize and populate the collection in the global scope.
++ # Initializes the ChromaDB client with certain settings. These settings specify that the client should use DuckDB with Parquet for storage,
++ # and it should store its data in a directory named 'database'.
++ chroma_client = chromadb.Client(Settings(
++     chroma_db_impl="duckdb+parquet",
++     persist_directory="database"
++ ))
++
++ # Initializes a CohereEmbeddingFunction, which is a specific function that generates embeddings using the Cohere model.
++ # These embeddings will be used to add and retrieve examples in the ChromaDB database.
++ cohere_ef = embedding_functions.CohereEmbeddingFunction(
++     api_key=os.getenv("COHERE_KEY"), model_name=os.getenv("COHERE_MODEL_NAME")
++ )
++
++ # Gets or creates a ChromaDB collection named 'help_desk', using the Cohere embedding function.
++ example_collection = chroma_client.get_or_create_collection(
++     name="help_desk", embedding_function=cohere_ef)
++
++ # Reads the CSV data into pandas DataFrames.
++ df_department = pd.read_csv('training_data_department.csv')
++ df_mood = pd.read_csv('training_data_mood.csv')
++
++ # Converts the DataFrames to lists of dictionaries.
++ department_dict = df_department.to_dict('records')
++ mood_dict = df_mood.to_dict('records')
++
++ # If the number of examples in the collection is less than the number of examples in the department data,
++ # adds the examples to the collection.
++ if example_collection.count() < len(department_dict):
++     for id, item in enumerate(department_dict):
++         index = example_collection.count() if example_collection.count() is not None else 0
++         example_collection.add(
++             documents=[item['text']],
++             metadatas=[{"department": item['label'],
++                         "mood": mood_dict[id]['label']}],
++             ids=[f"id_{index}"]
++         )
Next, we will replace the hard-coded examples in both our get_department_classification() and get_mood_classification() functions with query results from the Chroma collection that we created and populated earlier.
def get_department_classification(messages, co):
--     department_examples = [
--         Example("How do I recharge my energy beam?", "Equipment Maintenance"),
--         Example("How can I manage the collateral damage from my powers?", "City Relations"),
--         Example("Can you assist me with writing a speech for the mayor's ceremony?", "Public Relations"),
--         Example("Is there a special way to activate my stealth mode?", "Equipment Maintenance"),
--         Example("What are the current laws regarding secret identities?", "Legal"),
--         Example("There's a massive tornado headed for the city, what's our plan?", "Emergency Response"),
--         Example("I pulled a muscle while lifting a car, what should I do?", "Medical"),
--         Example("My superhero suit is damaged, can you help me fix it?", "Equipment Maintenance"),
--         Example("I need training for underwater missions, who can help me?", "Training"),
--         Example("What should I say to the press about my recent rescue operation?", "Public Relations"),
--         Example("How do I handle paperwork for the arrested supervillain?", "Legal"),
--         Example("I'm feeling overwhelmed with this superhero life, what should I do?", "Mental Health"),
--         Example("How do I track the invisible villain?", "Intelligence"),
--         Example("I've been exposed to a new kind of radiation, do you have any info about this?", "Medical"),
--         Example("My communication device is not working, how do I fix it?", "Equipment Maintenance"),
--         Example("How can I improve my relations with the local police department?", "City Relations"),
--         Example("My speed isn't improving, can you help me figure out a new training plan?", "Training"),
--         Example("What's our protocol for inter-dimensional threats?", "Emergency Response"),
--         Example("A civilian saw me without my mask, what should I do?", "Legal"),
--         Example("How do I maintain my gear to ensure it doesn't fail during missions?", "Equipment Maintenance"),
--         Example("I can't shake off the guilt after failing a mission, what should I do?", "Mental Health"),
--         Example("The villain seems to know my every move, do we have a mole?", "Intelligence"),
--         Example("I'm having nightmares about past battles, can someone help?", "Mental Health"),
--         Example("How can we predict the villain's next move?", "Intelligence"),
--         Example("I'm struggling to balance my civilian life and superhero duties, any advice?", "Mental Health")
--     ]
++     department_examples = []
++     results = example_collection.query(
++         query_texts=[messages],
++         n_results=90
++     )
++
++     for doc, md in zip(results['documents'][0], results['metadatas'][0]):
++         department_examples.append(Example(doc, md['department']))

    department_response = co.classify(
--         model='large',
++         model=os.getenv("COHERE_MODEL_NAME"),
        inputs=[messages],
        examples=department_examples
    )  # Sends the classification request to the Cohere model
The changes made to both functions follow a similar pattern: we've replaced the hard-coded list of examples with a query to our database, and subsequently parsed the query results into Example() objects. The choice to set n_results=90 wasn't arbitrary; I wanted to ensure the sample size was robust and representative of the overall dataset. While there's no hard and fast rule for this selection, 90 examples struck a good balance between computational efficiency and model performance for this particular application.
def get_mood_classification(messages, co):
--     mood_examples = [
--         Example("How do I recharge my energy beam?", "Neutral"),
--         Example("How can I manage the collateral damage from my powers?", "Anxious"),
--         Example("Can you assist me with writing a speech for the mayor's ceremony?", "Joyful"),
--         Example("Is there a special way to activate my stealth mode?", "Neutral"),
--         Example("What are the current laws regarding secret identities?", "Neutral"),
--         Example("There's a massive tornado headed for the city, what's our plan?", "Anxious"),
--         Example("I pulled a muscle while lifting a car, what should I do?", "Sorrowful"),
--         Example("My superhero suit is damaged, can you help me fix it?", "Frustrated"),
--         Example("I need training for underwater missions, who can help me?", "Satisfied"),
--         Example("What should I say to the press about my recent rescue operation?", "Joyful"),
--         Example("How do I handle paperwork for the arrested supervillain?", "Irritated"),
--         Example("I'm feeling overwhelmed with this superhero life, what should I do?", "Despair"),
--         Example("How do I track the invisible villain?", "Frustrated"),
--         Example("I've been exposed to a new kind of radiation, do you have any info about this?", "Sorrowful"),
--         Example("My communication device is not working, how do I fix it?", "Irritated"),
--         Example("How can I improve my relations with the local police department?", "Satisfied"),
--         Example("My speed isn't improving, can you help me figure out a new training plan?", "Joyful"),
--         Example("What's our protocol for inter-dimensional threats?", "Despair"),
--         Example("A civilian saw me without my mask, what should I do?", "Anxious"),
--         Example("How do I maintain my gear to ensure it doesn't fail during missions?", "Neutral"),
--         Example("I can't shake off the guilt after failing a mission, what should I do?", "Sorrowful"),
--         Example("The villain seems to know my every move, do we have a mole?", "Frustrated"),
--         Example("I'm having nightmares about past battles, can someone help?", "Despair"),
--         Example("How can we predict the villain's next move?", "Anxious"),
--         Example("I'm struggling to balance my civilian life and superhero duties, any advice?", "Satisfied")
--     ]
++     mood_examples = []
++     results = example_collection.query(
++         query_texts=[messages],
++         n_results=90
++     )
++
++     for doc, md in zip(results['documents'][0], results['metadatas'][0]):
++         mood_examples.append(Example(doc, md['mood']))

    mood_response = co.classify(
        model=os.getenv("COHERE_MODEL_NAME"),
        inputs=[messages],
        examples=mood_examples
    )  # Sends the classification request to the Cohere model
    mood = mood_response.classifications[0].prediction  # Extracts the prediction from the response
    return mood
Finally, we add code to store the message sent by the user as well as responses (both department and mood) to the collection, further augmenting the examples that the help desk app can use! Note that we're not saving the 'quit' input in this process.
        if input_text.lower() == "quit":
            print("Goodbye!")
            break  # This breaks the infinite loop, ending the script.
        response, mood, department = generate_response(input_text)
++         # Adds the response to the ChromaDB collection.
++         index = example_collection.count() if example_collection.count() is not None else 0
++         example_collection.add(
++             documents=[response],
++             metadatas=[{"department": department,
++                         "mood": mood}],
++             ids=[f"id_{index}"]
++         )
Testing the Help Desk App with ChromaDB-powered Examples
Finally, it's time to test our revised help desk app with ChromaDB-powered examples! Remember the last question we asked the app about "double jump"? Let's ask the app that again! As the app initializes the Chroma collection and populates it with example data from our CSV files, the start-up process may take a bit longer than usual.
If things go well, we should see the same input prompt that was shown in our previous test. Let's test the application with a previously problematic query: "My double jump is getting weaker these days, what should I do about this?".
Success! The app correctly assigned the inquiry and stored the response in the collection for future reference. This time, it recognized 'double jump' as a physical ability and correctly assigned it to the Training department.
Conclusion
Throughout this tutorial, we explored the powerful capabilities of the Cohere platform. This language understanding model enables efficient and affordable processing, providing functionalities that can deeply understand and interpret semantics in language.
We showcased the classify endpoint within our Superhero Help Desk app, demonstrating its ability to deliver accurate classifications. While we utilized 90 examples for illustration, it's important to remember that the model's effectiveness isn't limited to this number. You have the flexibility to adapt the model according to your specific needs.
One of the main highlights was the integration of Chroma database, which empowers the app with a dynamic learning capability. Not only can the database expand with initially available data, it can also grow through user interactions. This means that the Help Desk app learns and evolves with each use, offering increasingly accurate responses.
Thank you for joining me on this journey. I hope you've had as much fun building this AI app and learning about these technologies as I had writing this tutorial. For those interested in diving deeper, you can access the completed project on my GitHub repository. I look forward to seeing you in the next tutorial! In the meantime, check out our other AI tutorials.