Mastering AI with Upstage Solar LLM: From Use Cases to Agent Integration

Thursday, September 12, 2024 by TommyA

Introduction

Hello!šŸ‘‹šŸ½ I'm Tommy, and today, weā€™re diving into the dynamic world of Upstage Solar LLM ā€” a powerful suite of AI models designed to elevate your applications to new heights. In this guide, we'll uncover the unique capabilities of Solar LLM, a collection of advanced language models that bring efficiency, multilingual support, and factual accuracy to your AI projects.

Whether you're creating an intelligent kitchen assistant, moderating multilingual content on social media, or building a context-aware customer support bot, this tutorial will provide you with the know-how to leverage Solar LLM's strengths to their fullest potential. Stick around to see how these models can transform your applications through practical, real-world use cases, with a hands-on implementation in Google Colab at the end! 🚀

Upstage Solar LLM Models Overview

Upstage Solar LLM is more than just a collection of language models; it's a powerful suite of tools designed to bring AI-driven applications to life with efficiency and precision. The Solar LLM models are tailored for various tasks, from engaging in natural language conversations to performing complex translations, content moderation, and more. Additionally, Solar LLM offers advanced text embedding capabilities, making it a comprehensive solution for all your AI needs.

Core Models in Solar LLM:

  • solar-1-mini-chat: A compact, multilingual chat model designed for dynamic and context-aware conversations, perfect for building interactive chatbots.
  • solar-1-mini-translate-koen: A specialized model for real-time translation between Korean and English, ideal for multilingual communication.
  • solar-1-mini-groundedness-check: Ensures that AI-generated responses are accurate and contextually appropriate, minimizing errors and misinformation.
  • Solar Embeddings API: Converts text into numerical representations (embeddings) that are easy for computers to process. This API includes:
    • solar-embedding-1-large-query: Optimized for embedding user queries to enhance search accuracy.
    • solar-embedding-1-large-passage: Designed for embedding documents, making it easier to retrieve relevant information when users perform searches.

These models work together to offer a robust AI toolkit that can handle everything from real-time conversations to advanced text processing tasks.
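To make the embeddings side concrete, here is a minimal sketch of how the two embedding variants are typically used through the langchain-upstage wrapper. It assumes UPSTAGE_API_KEY is already set in your environment (see the setup section below) and that UpstageEmbeddings routes embed_query and embed_documents to the query and passage variants, as described in the langchain-upstage documentation; the sample strings are illustrative.

from langchain_upstage import UpstageEmbeddings

# Uses the solar-embedding-1-large family under the hood:
# embed_query targets the -query variant, embed_documents the -passage variant
embeddings = UpstageEmbeddings(model="solar-embedding-1-large")

# Embed a user query and a candidate document passage (illustrative text)
query_vector = embeddings.embed_query("How do I reset my password?")
passage_vectors = embeddings.embed_documents([
    "To reset your password, open Settings and select 'Reset password'."
])

print(len(query_vector))  # dimensionality of the returned embedding

With both vectors in hand, you can compare them with cosine similarity in a vector store to rank passages against a query.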

Why Use Solar LLM?

Choosing Solar LLM means opting for a suite of AI models that are not only powerful but also versatile, catering to a wide range of applications. Here's why Solar LLM stands out:

  1. Efficiency and Performance: Solar LLM models are designed to be lightweight without sacrificing power. This makes them perfect for real-time applications where speed and resource efficiency are crucial.
  2. Multilingual Capabilities: With specialized models like solar-1-mini-translate-koen, Solar LLM excels in handling and translating content across multiple languages, making it an excellent choice for global applications.
  3. Dynamic Function Integration: The ability of Solar LLM to call external functions dynamically allows for the creation of responsive, interactive AI applications. This is particularly useful for tasks like real-time recommendations or data retrieval.
  4. Groundedness Check: This feature ensures that all responses generated by Solar LLM are factually correct and relevant to the context, which is critical for applications where accuracy is paramount, such as customer support or healthcare.
  5. Advanced Text Embeddings: The Solar Embeddings API adds another layer of functionality by converting text into numerical embeddings that machines can easily process. Whether you're building a search engine or a retrieval system, Solar LLM's dual embedding models (for user queries and document passages) enhance the efficiency and accuracy of text processing tasks, ensuring that relevant information is always within reach.
  6. Developer-Friendly: Solar LLM is designed with developers in mind, offering straightforward APIs and excellent documentation. This makes it easy to integrate these powerful models into your existing projects or start new ones with minimal friction.

Setup and Dependencies

Before we dive into the use cases, let's make sure your environment is ready for testing the Solar LLM models. I used Google Colab to run my examples, but you can also execute them in any Python environment with a few adjustments.

Dependencies to Install

To get started, you'll need to install the necessary libraries. If you are using Google Colab, run the following command:

!pip install -qqq langchain-upstage langchain 

If you're running the code in your local Python environment, remove the exclamation mark:

pip install -qqq langchain-upstage langchain

Initializing the Upstage API Key

To use the Solar LLM models, you need to initialize your Upstage API key. In Google Colab, you can do this by running:

from google.colab import userdata
import os

# Initialize Upstage API Key
os.environ["UPSTAGE_API_KEY"] = userdata.get('UPSTAGE_API_KEY')

This code fetches your API key securely from Google Colab's user data.

For those running the code in a local Python environment, you can use the python-dotenv library to set up your environment variables or directly set the API key as a string:

  1. Using python-dotenv:

Install the library:

pip install python-dotenv

Create a .env file in your project directory and add:

UPSTAGE_API_KEY=your_api_key_here

Then, in your Python script, load it:

from dotenv import load_dotenv
import os

load_dotenv()
api_key = os.getenv('UPSTAGE_API_KEY')

  2. Directly in your script:

import os

# Directly setting the API key
os.environ["UPSTAGE_API_KEY"] = "your_api_key_here"

Practical Use Cases for Solar LLM

Now that your environment is set up, let's explore some practical and easily relatable use cases for Solar LLM models. These examples showcase how Solar's unique capabilities can solve everyday problems, making AI integration seamless and efficient.

Use Case 1: Multilingual Content Moderation for Social Media

Objective: Use Solar LLM's translation and moderation capabilities to automatically manage user-generated content on a multilingual social media platform (here, Korean-language posts), ensuring community guidelines are upheld.

Implementation:

from langchain_upstage import ChatUpstage
from langchain_core.messages import HumanMessage, AIMessage

# Initialize the translation model for Korean to English translations
chat = ChatUpstage(model="solar-1-mini-translate-koen")

# Define a message moderation scenario
messages = [
    HumanMessage(content="ģ“ź²ƒģ€ ģ¹œź·¼ķ•œ ėŒ“źø€ģž…ė‹ˆė‹¤."),  # "This is a friendly comment." (Expected to pass)
    HumanMessage(content="ė‚œ ģ“ ź²Œģ‹œė¬¼ģ“ ģ‹«ģ–“! ė„ˆķ¬ė“¤ ėŖØė‘ ķ‹€ė øģ–“!"),  # "I hate this post! You are all wrong." (Likely to be flagged)
]

# Translate the messages into English for moderation
translated_responses = [chat.invoke([message]) for message in messages]

# Mock content moderation function
def moderate_content(message):
    # List of flagged words in English
    flagged_words = ["hate", "wrong"]
    if any(word in message.content.lower() for word in flagged_words):
        return "Content flagged for review"
    return "Content approved"

# Moderate each translated message
for response in translated_responses:
    moderation_result = moderate_content(response)
    print(moderation_result)

Running the code block above gave the expected output, flagging the second message:

Use case 1 output

Explanation:

This use case shows how Solar's translation capabilities can be leveraged for content moderation. The system translates user-generated content in real time and checks it for offensive or inappropriate language, helping maintain a positive environment on social media platforms.

Use Case 2: Context-Aware Customer Support Chatbot

Objective: Build a customer support chatbot that handles user queries and ensures that responses are factually correct by validating them with Solar's groundedness check model.

Implementation:

from langchain_upstage import ChatUpstage, UpstageGroundednessCheck
from langchain_core.messages import HumanMessage, SystemMessage

# Initialize models
chat_model = ChatUpstage(model="solar-1-mini-chat")
groundedness_checker = UpstageGroundednessCheck()

def customer_support(messages):
    # Invoke the chat model to get a response
    response = chat_model.invoke(messages)
    
    # Check if the response is grounded in the given context
    check_result = groundedness_checker.invoke({
        "context": messages[1].content, 
        "answer": response.content
    })
    return response.content if check_result == "grounded" else "Response needs review."

# Example conversation
messages = [
    SystemMessage(content="You are a customer support assistant."),
    HumanMessage(content="How can I reset my password?")
]
print(customer_support(messages))

How the Groundedness Check Works:

The groundedness check in Solar LLM plays a crucial role in maintaining the accuracy and reliability of the chatbot's responses. In this use case:

  • The chat model generates a response to a user's query (e.g., "How can I reset my password?").
  • The groundedness check model then verifies if the generated response is factually correct and relevant to the user's question.

The response after running the code block above:

Use case 2 output

For example, if the chatbot's response is "I kick the ball," which clearly does not relate to the user's query about resetting a password, the groundedness check model will flag it with "Response needs review." This mechanism ensures that responses are contextually appropriate and aligned with the user's expectations, making the chatbot more reliable and trustworthy.
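You can also exercise the checker directly to see an ungrounded case get caught. Below is a quick sketch reusing the groundedness_checker initialized in the code above; the exact return labels ("grounded", "notGrounded", "notSure") are assumptions based on the langchain-upstage documentation.

# Directly check an answer that has nothing to do with the context
check_result = groundedness_checker.invoke({
    "context": "How can I reset my password?",
    "answer": "I kick the ball",
})
print(check_result)  # expected to be something other than "grounded"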

Why This Matters:

This feature is essential in applications where factual correctness is critical, such as customer support, healthcare, or financial advice. By using the groundedness check, Solar LLM minimizes the risk of providing misleading or incorrect information, ensuring a better user experience and maintaining trust in AI-driven solutions.

Use Case 3: Dynamic Recipe Recommendation Based on Ingredients

Objective: Create a smart kitchen assistant that dynamically suggests recipes based on the ingredients available at home, leveraging Solar LLM's function-calling capabilities to fetch relevant recipe options in real time.

Implementation:

from langchain_upstage import ChatUpstage
from langchain.tools import tool

# Define a custom function to recommend recipes based on ingredients
@tool
def recommend_recipe(ingredients):
    """Recommend recipes based on the given ingredients."""
    # Mock recipe database
    recipe_database = {
        "pasta": ["Spaghetti Carbonara", "Penne Arrabbiata"],
        "chicken": ["Chicken Alfredo", "Grilled Chicken Salad"],
        "rice": ["Fried Rice", "Rice Pudding"]
    }

    # Match ingredients to recipes
    matched_recipes = [
        recipe
        for ingredient in ingredients
        if ingredient in recipe_database
        for recipe in recipe_database[ingredient]
    ]
    return f"Based on the ingredients, you can make: {', '.join(matched_recipes) if matched_recipes else 'No recipes found with the given ingredients.'}"

# Set up Solar LLM with tools
available_functions = {"recommend_recipe": recommend_recipe}
llm = ChatUpstage()
tools = [recommend_recipe]
llm_with_tools = llm.bind_tools(tools)

# Example: User asks for recipe suggestions
messages = [{"role": "user", "content": "What can I cook with chicken and pasta?"}]
response = llm_with_tools.invoke(messages)

if response.tool_calls:
    tool_call = response.tool_calls[0]
    function_name = tool_call["name"]
    function_to_call = available_functions[function_name]
    function_args = tool_call["args"]
    function_response = function_to_call.invoke(function_args)

    print(function_response)

Explanation:

In this example, Solar LLM utilizes its function-calling capability to create a dynamic recipe suggestion system. When the user asks, "What can I cook with chicken and pasta?", the model recognizes that it needs to call the recommend_recipe function to provide an appropriate answer.

  • Custom Recipe Function: The recommend_recipe function checks the mock recipe database for matches based on the provided ingredients (chicken and pasta). It finds relevant recipes associated with each ingredient:
    • For pasta: "Spaghetti Carbonara," "Penne Arrabbiata"
    • For chicken: "Chicken Alfredo," "Grilled Chicken Salad"
  • Dynamic Integration with Solar LLM: The function returns a combined list of recipes that can be made with the user's ingredients, and Solar LLM dynamically integrates this list into its response.

Why This Is Useful:

This use case demonstrates how Solar LLM can leverage external functions to provide dynamic and personalized content, making it ideal for smart kitchen assistants, cooking apps, or any application that requires real-time data integration and recommendations.

By combining multiple ingredients and fetching the corresponding recipes from a predefined database, Solar LLM enables a more tailored user experience, offering practical and actionable suggestions that users can rely on.
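To return a conversational answer instead of printing the raw tool output, you can feed the tool result back to the model as a tool message and invoke it again. Here is a minimal sketch of that round trip, continuing from the code above; it assumes each tool_call entry carries an "id" field, as in LangChain's tool-calling interface.

from langchain_core.messages import ToolMessage

# Append the assistant's tool-call message and the tool's result,
# then ask the model to phrase a final answer for the user
followup = messages + [
    response,
    ToolMessage(content=function_response, tool_call_id=tool_call["id"]),
]
final_answer = llm_with_tools.invoke(followup)
print(final_answer.content)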

Integrating Solar LLM into an AI Agent

Now that we've explored some practical use cases for Solar LLM, let's integrate this powerful language model into an AI agent. By doing so, the agent can utilize Solar LLM's advanced capabilities to perform various tasks more effectively.

Step 1: Initialize the Solar LLM

Start by initializing the Solar LLM model that you want your agent to use. In this example, we'll use the solar-1-mini-chat model, which is well-suited for dynamic, context-aware conversations.

from langchain_upstage import ChatUpstage

# Initialize the Solar LLM model for chat
upstage_chat_llm = ChatUpstage(model="solar-1-mini-chat")

This sets up the solar-1-mini-chat model, ready to be used by the agent.

Step 2: Create an AI Agent Using Solar LLM

Next, define an agent with the crewai library and pass the initialized Solar LLM model to it. This enables the agent to leverage Solar LLM's capabilities for its defined role.

from crewai import Agent
from textwrap import dedent

# Create an agent and assign the Solar LLM to it
content_agent = Agent(
    role="Content Creator",
    goal="Create quality content on {topic} passed for a blog",
    backstory="You are an experienced content creator for a renowned blog company.",
    allow_delegation=False,
    llm=upstage_chat_llm
)

Explanation:

  • Role and Goal: The agent is defined with a specific role ("Content Creator") and a clear goal ("Create quality content on {topic} for a blog").
  • Backstory: This provides context for the agent's tasks, ensuring content aligns with the persona of an "experienced content creator for a renowned blog company."
  • LLM Assignment: The llm parameter is set to the upstage_chat_llm model, allowing the agent to utilize the Solar LLM for generating content or handling tasks.
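To actually run the agent, crewai expects a task and a crew around it. The following is a minimal sketch of that wiring; Task, Crew, and kickoff follow crewai's documented usage, while the task description, expected output, and topic value are illustrative.

from crewai import Task, Crew

# Define a task for the content agent (illustrative description and output spec)
content_task = Task(
    description="Write a short blog post about {topic}.",
    expected_output="A well-structured blog post of three to four paragraphs.",
    agent=content_agent,
)

# Assemble the crew and run it with a concrete topic
crew = Crew(agents=[content_agent], tasks=[content_task])
result = crew.kickoff(inputs={"topic": "practical uses of solar energy"})
print(result)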

View the Google Colab used for this tutorial here.

Next Steps

Now that you've seen how to integrate Solar LLM with an AI agent, here are the next steps to expand your knowledge and capabilities:

  1. Experiment with Different Models: Explore other Solar LLM models, such as solar-1-mini-translate-koen for multilingual translation or solar-1-mini-groundedness-check for ensuring factual correctness in generated content. This will help you understand which models work best for different use cases.
  2. Build Custom Functions: Create custom functions that can be dynamically called by Solar LLM. This could include integrating databases, external APIs, or your own logic to enhance the responsiveness and capability of your AI applications.
  3. Optimize Performance with Embeddings: Utilize the Solar Embeddings API to improve information retrieval tasks, like building a search engine or a recommendation system. Experiment with solar-embedding-1-large-query for user queries and solar-embedding-1-large-passage for document embedding to see how embeddings can improve text matching and relevance.
  4. Expand Your Projects: Start applying Solar LLM and agent integrations in real-world applications, such as customer support systems, content creation tools, and dynamic recommendation engines. Test different configurations and see how Solar LLM can add value to your existing or new projects.

Conclusion

In this tutorial, we've explored the versatile capabilities of Upstage Solar LLM, from practical use cases like dynamic recipe recommendations, multilingual content moderation, and context-aware customer support chatbots to integrating Solar LLM with an AI agent for more sophisticated applications.

We've seen how Solar LLM models, like solar-1-mini-chat, solar-1-mini-translate-koen, and solar-1-mini-groundedness-check, can help create smarter, more dynamic AI solutions by providing efficient, multilingual, and accurate language processing. We also highlighted the unique power of the Solar Embeddings API to enhance tasks like search and retrieval, offering a full spectrum of tools to take your AI projects to the next level.

For more information and a quick start guide to getting hands-on with Solar LLM, check out the Solar LLM Quick Start Page. Happy coding! 🚀
