LangGraph Tutorial: Building a Tool-Enabled Conversational Agent - Unit 2.1 Exercise 5

Discover how to create a conversational agent that combines AI-driven decision-making with real-world tools for dynamic, context-aware interactions.

🎯 What You'll Learn Today

This tutorial is also available in Google Colab here or for download here

Joint Initiative: This tutorial is part of a collaboration between AI Product Engineer and the Nebius Academy.

This tutorial guides you through building a complete tool-enabled conversational agent using LangGraph and LangChain. The system demonstrates proper state management, tool integration, and conversation flow control.

Key Concepts Covered

  1. State Management in Conversational AI
  2. Tool Integration and Execution
  3. Message Type Handling
  4. Graph-based Conversation Flow
  5. Error Handling in AI Systems
!pip install langchain-core
!pip install langgraph
!pip install langchain-community

import os
import uuid
from typing import Annotated, Any, TypedDict

from langchain_community.tools import TavilySearchResults
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, ToolMessage
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages

Step 1: Environment and State Setup

We begin with environment configuration and state definition.

Why This Matters

Proper setup is crucial because it:

  1. Ensures consistent tool access
  2. Enables clean state management
  3. Provides type safety
  4. Facilitates debugging

Debug Tips

  1. Environment Setup:

    • Verify API key presence
    • Check environment variables
    • Monitor tool initialization
    • Test state structure
# Configure the Tavily API key
os.environ["TAVILY_API_KEY"] = "your-tavily-api-key-here"


class State(TypedDict):
    """Defines the conversation state structure.

    This state implementation tracks:
    1. Message history with proper annotation
    2. Tool call specifications
    3. Tool execution results

    Attributes:
        messages: Conversation history with add_messages annotation
        tool_calls: Pending tool executions
        tool_outputs: Results from tool operations
    """

    messages: Annotated[list[BaseMessage], add_messages]
    tool_calls: list[dict]
    tool_outputs: list[Any]
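
Before moving on, it helps to see what the add_messages annotation actually does: it merges message updates into the existing history instead of replacing it, which is why each node below only returns the messages it produced. A minimal sketch, purely illustrative and not part of the exercise:

# add_messages appends new messages to the existing history (matching ids
# would update in place), so nodes never need to copy the full list.
history = [HumanMessage(content="What is the capital of France?")]
update = [AIMessage(content="Let me search for that.")]
merged = add_messages(history, update)
print([type(m).__name__ for m in merged])  # ['HumanMessage', 'AIMessage']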

Step 2: LLM Node Implementation

We implement the core decision-making logic.

Why This Matters

LLM node implementation is crucial because it:

  1. Controls conversation flow
  2. Makes tool usage decisions
  3. Manages message generation
  4. Handles conversation state

Debug Tips

  1. LLM Node Behavior:

    • Monitor decision points
    • Track state changes
    • Verify message handling
    • Check tool call generation
def llm_node(state: State) -> State:
    """Simulates LLM decision-making in conversation.

    This function demonstrates:
    1. Initial state handling
    2. Message analysis
    3. Tool call decisions
    4. Response generation

    Args:
        state: Current conversation state

    Returns:
        Updated state with new messages/tool calls
    """
    if not state.get("messages"):
        return {
            "messages": [HumanMessage(content="What is the capital of France?")],
            "tool_calls": [],
            "tool_outputs": [],
        }

    last_message = state["messages"][-1].content

    if (
        isinstance(state["messages"][-1], HumanMessage)
        and "capital of France" in last_message
    ):
        return {
            "messages": [
                AIMessage(
                    content="Let me search for information about the capital of France."
                )
            ],
            "tool_calls": [
                {
                    "tool_name": "TavilySearchResults",
                    "args": {"query": "capital of France"},
                    "id": str(uuid.uuid4()),
                }
            ],
            "tool_outputs": [],
        }

    return {
        "messages": [AIMessage(content="I hope that information was helpful!")],
        "tool_calls": [],
        "tool_outputs": [],
    }
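
You can exercise this node on its own before wiring up the graph. A quick sketch, assuming the State definition and imports above:

# First call on an empty state: the node seeds the conversation.
state: State = {"messages": [], "tool_calls": [], "tool_outputs": []}
update = llm_node(state)
print(update["messages"][0].content)  # What is the capital of France?

# Second call with the seeded question: the node requests a search.
state["messages"] = update["messages"]
update = llm_node(state)
print(update["tool_calls"][0]["tool_name"])  # TavilySearchResults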

Step 3: Tool Execution Implementation

We implement tool execution and result handling.

Why This Matters

Tool execution is crucial because it:

  1. Provides external information
  2. Handles API interactions
  3. Manages error cases
  4. Structures responses

Debug Tips

  1. Tool Execution:

    • Monitor API calls
    • Track error handling
    • Verify output format
    • Check state updates
def tool_executor(state: State) -> State:
    """Executes tools and processes results.

    Args:
        state: Current conversation state

    Returns:
        Updated state with tool results
    """
    if not state.get("tool_calls"):
        return {"tool_outputs": []}

    tool_call = state["tool_calls"][-1]
    tool_call_id = tool_call.get("id", str(uuid.uuid4()))
    tool_call["id"] = tool_call_id
    tavily_tool = TavilySearchResults()

    try:
        if tool_call["tool_name"] == "TavilySearchResults":
            output = tavily_tool.invoke(tool_call["args"])
            if output:
                return {
                    "tool_outputs": [
                        {
                            "content": "Based on the search results, Paris is the capital of France. "
                            "It is the country's largest city and a major European cultural "
                            "and economic center.",
                            "tool_call_id": tool_call_id,
                            "tool_name": "TavilySearchResults",
                        }
                    ]
                }
            return {
                "tool_outputs": [
                    {
                        "content": "No relevant information found.",
                        "tool_call_id": tool_call_id,
                        "tool_name": "TavilySearchResults",
                    }
                ]
            }
    except Exception as e:
        return {
            "tool_outputs": [
                {
                    "content": f"I encountered an error while searching: {e!s}",
                    "tool_call_id": tool_call_id,
                    "tool_name": "TavilySearchResults",
                }
            ]
        }

    return {"tool_outputs": []}

Step 4: Result Processing Implementation

We implement result processing and message generation.

Why This Matters

Result processing is crucial because it:

  1. Formats responses consistently
  2. Maintains message type structure
  3. Enables proper conversation flow
  4. Supports error handling
def result_processor(state: State) -> State:
    """Process tool outputs into messages.

    Args:
        state: Current conversation state

    Returns:
        Updated state with processed messages
    """
    if not state.get("tool_outputs"):
        return {"messages": [], "tool_calls": [], "tool_outputs": []}

    tool_output = state["tool_outputs"][-1]
    tool_message = ToolMessage(
        content=tool_output["content"],
        tool_call_id=tool_output["tool_call_id"],
        name=tool_output["tool_name"],
    )
    ai_message = AIMessage(content=f"Here's what I found: {tool_output['content']}")

    return {
        "messages": [tool_message, ai_message],
        "tool_calls": [],
        "tool_outputs": [],
    }
def should_end(state: State) -> bool:
    """Determine conversation end condition.

    Args:
        state: Current conversation state

    Returns:
        Boolean indicating end condition
    """
    if not state.get("messages"):
        return False

    last_message = state["messages"][-1]
    return isinstance(last_message, AIMessage) and "helpful" in last_message.content
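
Both helpers can be checked without any API calls by feeding them hand-built state. A small sketch:

# Turn a fake tool output into messages, then test the end condition.
processed = result_processor({
    "messages": [],
    "tool_calls": [],
    "tool_outputs": [
        {
            "content": "Paris is the capital of France.",
            "tool_call_id": "test-id",
            "tool_name": "TavilySearchResults",
        }
    ],
})
print(should_end(processed))  # False: the closing phrase has not appeared yet

closing = {
    "messages": [AIMessage(content="I hope that information was helpful!")],
    "tool_calls": [],
    "tool_outputs": [],
}
print(should_end(closing))  # True: the conversation can end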

Step 5: Graph Construction and Execution

We put everything together into a working system.

Why This Matters

Graph construction is crucial because it:

  1. Defines conversation flow
  2. Manages state transitions
  3. Controls execution order
  4. Enables system testing
def create_conversation_graph() -> StateGraph:
    """Create and configure conversation flow.

    Returns:
        Compiled conversation graph
    """
    graph = StateGraph(State)
    graph.add_node("llm", llm_node)
    graph.add_node("tool_executor", tool_executor)
    graph.add_node("result_processor", result_processor)

    graph.add_edge(START, "llm")
    graph.add_edge("llm", "tool_executor")
    graph.add_edge("tool_executor", "result_processor")

    graph.add_conditional_edges(
        "result_processor", should_end, {True: END, False: "llm"}
    )

    return graph.compile()
def main():
    """Demonstrate the conversation system."""
    state = {"messages": [], "tool_calls": [], "tool_outputs": []}
    chain = create_conversation_graph()
    result = chain.invoke(state)

    print("\nFinal conversation state:")
    for message in result["messages"]:
        print(f"\n{message.__class__.__name__}: {message.content}")

if __name__ == "__main__":
    main()

Common Pitfalls

  1. Missing API key configuration
  2. Improper error handling
  3. Incorrect message type usage
  4. Poor state management
  5. Unclear end conditions
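
For the first pitfall, a small guard at startup fails fast with a clear message instead of surfacing a confusing error inside the tool node. A minimal sketch:

import os

# Fail early if the Tavily key is missing rather than inside tool_executor.
if not os.environ.get("TAVILY_API_KEY"):
    raise RuntimeError("TAVILY_API_KEY is not set; configure it before building the graph.")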

Key Takeaways

  1. State Management: Clean state structure enables reliable operation
  2. Tool Integration: Proper tool setup ensures functionality
  3. Message Types: Different message types serve different purposes
  4. Flow Control: Clear conditions prevent infinite loops
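
On the flow-control point, LangGraph also exposes a recursion_limit in the run config, which caps the number of node executions per invocation; combined with a clear end condition it guarantees a run cannot loop forever. A sketch (the limit value is arbitrary):

chain = create_conversation_graph()
initial = {"messages": [], "tool_calls": [], "tool_outputs": []}

# Even if should_end never returned True, LangGraph would stop the run
# (raising an error) after 10 steps instead of looping indefinitely.
result = chain.invoke(initial, config={"recursion_limit": 10})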

Next Steps

  1. Add new tools (weather API, calculator)
  2. Enhance LLM node capabilities
  3. Improve end conditions
  4. Add error recovery
  5. Implement metrics tracking
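
As a starting point for the first item, a calculator tool could plug into tool_executor with its own tool_name branch. The sketch below is purely illustrative: a tiny, safe arithmetic evaluator, with the graph wiring left as an exercise.

import ast
import operator

# Supported binary operators for the illustrative calculator tool.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    """Recursively evaluate a parsed arithmetic expression."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("Unsupported expression")

def calculator(expression: str) -> str:
    """Evaluate a basic arithmetic expression, e.g. '2 * (3 + 4)'."""
    return str(_eval(ast.parse(expression, mode="eval").body))

print(calculator("2 * (3 + 4)"))  # 14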

Expected Output

Final conversation state:

HumanMessage: What is the capital of France?

AIMessage: Let me search for information about the capital of France.

ToolMessage: Based on the search results, Paris is the capital of France...

AIMessage: Here's what I found: Based on the search results, Paris is the capital...

AIMessage: I hope that information was helpful!

Rod Rivera
