LangGraph Tutorial: Building a Tool-Enabled Conversational Agent - Unit 2.1 Exercise 5
Joint Initiative: This tutorial is part of a collaboration between AI Product Engineer and the Nebius Academy.
🎯 What You'll Learn Today
This tutorial guides you through building a complete tool-enabled conversational agent using LangGraph and LangChain. The system demonstrates proper state management, tool integration, and conversation flow control.
Key Concepts Covered
- State Management in Conversational AI
- Tool Integration and Execution
- Message Type Handling
- Graph-based Conversation Flow
- Error Handling in AI Systems
!pip install langchain-core
!pip install langgraph
!pip install langchain-community

import os
import uuid
from typing import Annotated, Any, TypedDict

from langchain_community.tools import TavilySearchResults
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, ToolMessage
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages
Step 1: Environment and State Setup
We begin with environment configuration and state definition.
Why This Matters
Proper setup is crucial because it:
- Ensures consistent tool access
- Enables clean state management
- Provides type safety
- Facilitates debugging
Debug Tips
Environment Setup:
- Verify API key presence
- Check environment variables
- Monitor tool initialization
- Test state structure
# Configure the Tavily API key (replace the placeholder with your own key)
os.environ["TAVILY_API_KEY"] = "your-tavily-api-key-here"
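Hardcoding the key is fine for a quick demo, but a safer pattern is to read it from the environment and prompt only as a fallback. A minimal sketch using the standard library's getpass:

import getpass

# Prompt for the key only if it is not already set in the environment.
if not os.environ.get("TAVILY_API_KEY"):
    os.environ["TAVILY_API_KEY"] = getpass.getpass("Enter your Tavily API key: ")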
class State(TypedDict):
"""Defines the conversation state structure.
This state implementation tracks:
1. Message history with proper annotation
2. Tool call specifications
3. Tool execution results
Attributes:
messages: Conversation history with add_messages annotation
tool_calls: Pending tool executions
tool_outputs: Results from tool operations
"""
messages: Annotated[list[BaseMessage], add_messages]
tool_calls: list[dict]
tool_outputs: list[Any]
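The add_messages annotation is what lets each node return only its new messages: LangGraph applies it as a reducer that merges updates into the running history instead of overwriting the list. A quick sketch calling the reducer directly (using the classes imported above):

# add_messages appends the update to the existing history
# rather than replacing it.
history = [HumanMessage(content="Hi there")]
update = [AIMessage(content="Hello! How can I help?")]
merged = add_messages(history, update)
print(len(merged))  # 2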
Step 2: LLM Node Implementation
We implement the core decision-making logic.
Why This Matters
The LLM node implementation is crucial because it:
- Controls conversation flow
- Makes tool usage decisions
- Manages message generation
- Handles conversation state
Debug Tips
LLM Node Behavior:
- Monitor decision points
- Track state changes
- Verify message handling
- Check tool call generation
def llm_node(state: State) -> State:
"""Simulates LLM decision-making in conversation.
This function demonstrates:
1. Initial state handling
2. Message analysis
3. Tool call decisions
4. Response generation
Args:
state: Current conversation state
Returns:
Updated state with new messages/tool calls
"""
if not state.get("messages"):
return {
"messages": [HumanMessage(content="What is the capital of France?")],
"tool_calls": [],
"tool_outputs": [],
}
last_message = state["messages"][-1].content
if (
isinstance(state["messages"][-1], HumanMessage)
and "capital of France" in last_message
):
return {
"messages": [
AIMessage(
content="Let me search for information about the capital of France."
)
],
"tool_calls": [
{
"tool_name": "TavilySearchResults",
"args": {"query": "capital of France"},
"id": str(uuid.uuid4()),
}
],
}
return {
"messages": [AIMessage(content="I hope that information was helpful!")],
"tool_calls": [],
"tool_outputs": [],
}
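Because the node is a plain function over a state dict, you can exercise each branch before wiring it into a graph:

# Branch 1: an empty state seeds the conversation with the demo question.
seeded = llm_node({"messages": [], "tool_calls": [], "tool_outputs": []})
print(seeded["messages"][0].content)  # What is the capital of France?

# Branch 2: that question produces a search tool call.
followup = llm_node(seeded)
print(followup["tool_calls"][0]["tool_name"])  # TavilySearchResults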
Step 3: Tool Execution Implementation
We implement tool execution and result handling.
Why This Matters
Tool execution is crucial because it:
- Provides external information
- Handles API interactions
- Manages error cases
- Structures responses
Debug Tips
Tool Execution:
- Monitor API calls
- Track error handling
- Verify output format
- Check state updates
def tool_executor(state: State) -> State:
"""Executes tools and processes results.
Args:
state: Current conversation state
Returns:
Updated state with tool results
"""
if not state.get("tool_calls"):
return {"tool_outputs": []}
tool_call = state["tool_calls"][-1]
tool_call_id = tool_call.get("id", str(uuid.uuid4()))
tool_call["id"] = tool_call_id
tavily_tool = TavilySearchResults()
try:
if tool_call["tool_name"] == "TavilySearchResults":
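            # NOTE: the live search result below is only used as a success
            # check; the returned content is hardcoded for this exercise.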
output = tavily_tool.invoke(tool_call["args"])
if output:
return {
"tool_outputs": [
{
"content": "Based on the search results, Paris is the capital of France. "
"It is the country's largest city and a major European cultural "
"and economic center.",
"tool_call_id": tool_call_id,
"tool_name": "TavilySearchResults",
}
]
}
return {
"tool_outputs": [
{
"content": "No relevant information found.",
"tool_call_id": tool_call_id,
"tool_name": "TavilySearchResults",
}
]
}
except Exception as e:
return {
"tool_outputs": [
{
"content": f"I encountered an error while searching: {e!s}",
"tool_call_id": tool_call_id,
"tool_name": "TavilySearchResults",
}
]
}
return {"tool_outputs": []}
Step 4: Result Processing Implementation
We implement result processing and message generation.
Why This Matters
Result processing is crucial because it:
- Formats responses consistently
- Maintains message type structure
- Enables proper conversation flow
- Supports error handling
def result_processor(state: State) -> State:
"""Process tool outputs into messages.
Args:
state: Current conversation state
Returns:
Updated state with processed messages
"""
if not state.get("tool_outputs"):
return {"messages": [], "tool_calls": [], "tool_outputs": []}
tool_output = state["tool_outputs"][-1]
tool_message = ToolMessage(
content=tool_output["content"],
tool_call_id=tool_output["tool_call_id"],
name=tool_output["tool_name"],
)
ai_message = AIMessage(content=f"Here's what I found: {tool_output['content']}")
return {
"messages": [tool_message, ai_message],
"tool_calls": [],
"tool_outputs": [],
}
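You can check the conversion in isolation by handing it a fabricated tool output (the tool_call_id below is a placeholder):

processed = result_processor({
    "messages": [],
    "tool_calls": [],
    "tool_outputs": [{
        "content": "Paris is the capital of France.",
        "tool_call_id": "demo-id",  # placeholder id
        "tool_name": "TavilySearchResults",
    }],
})
print(type(processed["messages"][0]).__name__)  # ToolMessage
print(processed["messages"][1].content)         # Here's what I found: ...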
def should_end(state: State) -> bool:
"""Determine conversation end condition.
Args:
state: Current conversation state
Returns:
Boolean indicating end condition
"""
if not state.get("messages"):
return False
last_message = state["messages"][-1]
return isinstance(last_message, AIMessage) and "helpful" in last_message.content
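And a two-line sanity check of the end condition:

print(should_end({"messages": []}))  # False: nothing has been said yet
print(should_end({"messages": [AIMessage(content="I hope that information was helpful!")]}))  # True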
Step 5: Graph Construction and Execution
We put everything together into a working system.
Why This Matters
Graph construction is crucial because it:
- Defines conversation flow
- Manages state transitions
- Controls execution order
- Enables system testing
def create_conversation_graph():
"""Create and configure conversation flow.
Returns:
Compiled conversation graph
"""
graph = StateGraph(State)
graph.add_node("llm", llm_node)
graph.add_node("tool_executor", tool_executor)
graph.add_node("result_processor", result_processor)
graph.add_edge(START, "llm")
graph.add_edge("llm", "tool_executor")
graph.add_edge("tool_executor", "result_processor")
graph.add_conditional_edges(
"result_processor", should_end, {True: END, False: "llm"}
)
return graph.compile()
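If you want to watch each node fire rather than only inspect the final state, the compiled graph can also be streamed. A sketch using the "updates" stream mode, where each event maps a node name to the partial state it returned:

chain = create_conversation_graph()
initial = {"messages": [], "tool_calls": [], "tool_outputs": []}
for event in chain.stream(initial, stream_mode="updates"):
    for node_name, update in event.items():
        print(f"{node_name} returned keys: {list(update.keys())}")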
def main():
"""Demonstrate the conversation system."""
state = {"messages": [], "tool_calls": [], "tool_outputs": []}
chain = create_conversation_graph()
result = chain.invoke(state)
print("\nFinal conversation state:")
for message in result["messages"]:
print(f"\n{message.__class__.__name__}: {message.content}")
if __name__ == "__main__":
main()
Common Pitfalls
- Missing API key configuration
- Improper error handling
- Incorrect message type usage
- Poor state management
- Unclear end conditions
Key Takeaways
- State Management: Clean state structure enables reliable operation
- Tool Integration: Proper tool setup ensures functionality
- Message Types: Different message types serve different purposes
- Flow Control: Clear conditions prevent infinite loops
Next Steps
- Add new tools (weather API, calculator; see the sketch after this list)
- Enhance LLM node capabilities
- Improve end conditions
- Add error recovery
- Implement metrics tracking
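As a taste of the first suggestion, here is a sketch of a second tool. The calculator below is a hypothetical stand-in, not part of the exercise; it evaluates simple arithmetic safely via the standard library's ast module, and the commented lines show where a dispatch branch could slot into tool_executor:

import ast
import operator

def calculator(expression: str) -> str:
    """Hypothetical calculator tool: safely evaluates basic arithmetic."""
    ops = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
    }

    def _eval(node):
        # Accept only numeric literals and the four binary operators above.
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError("unsupported expression")

    return str(_eval(ast.parse(expression, mode="eval").body))

# Inside tool_executor, a second branch could then dispatch on tool_name:
# elif tool_call["tool_name"] == "calculator":
#     output = calculator(tool_call["args"]["expression"])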
Expected Output
Final conversation state:

HumanMessage: What is the capital of France?

AIMessage: Let me search for information about the capital of France.

ToolMessage: Based on the search results, Paris is the capital of France...

AIMessage: Here's what I found: Based on the search results, Paris is the capital...

AIMessage: I hope that information was helpful!