LangGraph Tutorial: Processing Tool Results - Unit 2.1 Exercise 4
🎯 What You'll Learn Today
📢 Joint Initiative
This tutorial is part of a collaboration between AIPE and Nebius Academy.
This tutorial demonstrates how to process tool outputs in LangGraph, converting raw results into structured messages and managing conversation flow.
Key Concepts Covered
- Tool Output Processing
- Message Type Management
- Conversation Flow Control
- Graph-based Processing
#!pip install langchain-core
#!pip install langgraph
import json
import uuid
from typing import Annotated, Any, TypedDict

from langchain_core.messages import AIMessage, BaseMessage, ToolMessage
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages
Step 1: State Definition
We define our state structure for result processing.
Why This Matters
Proper state management is crucial because it:
- Ensures consistent message handling
- Maintains tool execution context
- Enables clean conversation flow
- Supports debugging and monitoring
Debug Tips
State Verification:
- Monitor message list updates
- Track tool output processing
- Verify message type transitions
- Check state consistency
class State(TypedDict):
    """State container for result processing.

    This state implementation tracks:
    1. Message history with special handling
    2. Tool call specifications
    3. Raw tool outputs

    Attributes:
        messages: List of conversation messages
        tool_calls: List of pending tool calls
        tool_outputs: List of raw tool execution results
    """

    messages: Annotated[list[BaseMessage], add_messages]
    tool_calls: list[dict]
    tool_outputs: list[Any]
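The `add_messages` annotation tells LangGraph to append a node's returned messages to the existing list rather than overwrite it. A minimal pure-Python sketch of that reducer behavior, with a hypothetical `append_reducer` standing in for `add_messages`:

```python
# A stand-in reducer illustrating the append semantics that the
# add_messages annotation gives the `messages` channel: each node's
# return value is merged into state, not substituted for it.
def append_reducer(existing: list, update: list) -> list:
    # Combine prior messages with the node's new messages.
    return existing + update


state_messages = ["human: What is the capital of France?"]
node_update = ["ai: Let me look that up."]

merged = append_reducer(state_messages, node_update)
print(merged)
# Both turns survive the merge; a plain assignment would have
# dropped the original human message.
```

Without a reducer, each node's return value would replace the channel wholesale, which is exactly what you want for `tool_calls` and `tool_outputs` but not for conversation history.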
Step 2: Helper Functions
We implement utility functions for message processing.
Why This Matters
Helper functions are crucial because they:
- Enable clean message type handling
- Provide consistent data processing
- Support error recovery
- Facilitate debugging
def get_last_message_by_type(
    messages: list[BaseMessage], message_type: type[BaseMessage]
) -> BaseMessage | None:
    """Find the last message of a specific type.

    Args:
        messages: List of messages to search
        message_type: Type of message to find

    Returns:
        Last message of specified type or None
    """
    for message in reversed(messages):
        if isinstance(message, message_type):
            return message
    return None
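The reversed-search pattern is easy to verify in isolation. This sketch uses lightweight stand-in classes (`FakeAI` and `FakeTool` are illustrative, not part of langchain_core):

```python
# Minimal message stand-ins to exercise the newest-first lookup.
class FakeAI:
    def __init__(self, content):
        self.content = content


class FakeTool:
    def __init__(self, content):
        self.content = content


def last_of_type(messages, message_type):
    # Walk the history newest-first and return the first match.
    for message in reversed(messages):
        if isinstance(message, message_type):
            return message
    return None


history = [FakeAI("first"), FakeTool("tool out"), FakeAI("second")]
print(last_of_type(history, FakeAI).content)    # prints "second"
print(last_of_type(history, FakeTool).content)  # prints "tool out"
```

Searching from the end of the list matters: the conversation may contain many AI messages, and flow-control decisions should key off the most recent one.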
def process_tool_output(tool_output: Any) -> tuple[str, Any]:
    """Process raw tool output into usable format.

    Args:
        tool_output: Raw output to process

    Returns:
        Tuple of (processed_content, raw_data)
    """
    try:
        data = json.loads(tool_output) if isinstance(tool_output, str) else tool_output
    except json.JSONDecodeError:
        # Not valid JSON: pass the raw output through as plain text.
        return str(tool_output), tool_output
    if isinstance(data, dict):
        return data.get("result", str(data)), data
    # Valid JSON but not an object (e.g. a list or number).
    return str(data), data
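The helper's branching is easiest to check with a few representative inputs. This sketch inlines an equivalent function (named `process_output` here) so it runs standalone:

```python
import json
from typing import Any


def process_output(tool_output: Any) -> tuple[str, Any]:
    # Parse JSON strings; fall back to plain text on failure.
    try:
        data = json.loads(tool_output) if isinstance(tool_output, str) else tool_output
    except json.JSONDecodeError:
        return str(tool_output), tool_output
    if isinstance(data, dict):
        return data.get("result", str(data)), data
    return str(data), data


print(process_output('{"result": "42"}'))    # JSON object with a result key
print(process_output("plain text"))          # not JSON -> passed through as-is
print(process_output({"result": "cached"}))  # already a dict, no parsing needed
```

Covering the non-dict JSON case (a bare list or number parses successfully but has no `.get`) is what keeps the helper from raising `AttributeError` on well-formed but unexpected payloads.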
Step 3: Result Processor Implementation
We implement the core result processing logic.
Why This Matters
Result processing is crucial because it:
- Converts raw outputs to structured messages
- Maintains conversation coherence
- Enables proper message typing
- Supports conversation flow
Debug Tips
Processing Issues:
- Monitor message creation
- Track ID generation
- Verify content formatting
- Check type conversions
def result_processor(state: State) -> State:
    """Process tool results into appropriate message types.

    This function demonstrates:
    1. Tool output processing
    2. Message type conversion
    3. Conversation structuring

    Args:
        state: Current conversation state

    Returns:
        Updated state with new messages
    """
    if not state.get("tool_outputs"):
        return {"messages": [], "tool_calls": [], "tool_outputs": []}

    tool_output = state["tool_outputs"][-1]
    processed_content, raw_data = process_tool_output(tool_output)

    # Note: in a real agent, tool_call_id should echo the id from the
    # AIMessage tool call that triggered execution; a fresh UUID is
    # generated here purely for demonstration.
    tool_message = ToolMessage(
        content=str(raw_data),
        tool_call_id=str(uuid.uuid4()),
        name="search_tool",
    )
    ai_message = AIMessage(content=f"Here's what I found: {processed_content}")

    return {
        "messages": [tool_message, ai_message],
        "tool_calls": [],
        "tool_outputs": [],
    }
Step 4: Flow Control Implementation
We implement conversation flow control logic.
Why This Matters
Flow control is crucial because it:
- Manages conversation progression
- Prevents infinite loops
- Ensures proper termination
- Maintains conversation coherence
def should_end(state: State) -> bool:
    """Determine if conversation should end.

    Args:
        state: Current conversation state

    Returns:
        Boolean indicating whether to end
    """
    if not state.get("messages"):
        return False
    last_ai = get_last_message_by_type(state["messages"], AIMessage)
    if not last_ai:
        return False
    return "Here's what I found:" in last_ai.content
def create_processing_graph():
    """Create and configure the processing graph.

    Returns:
        Compiled graph ready for invocation
    """
    graph = StateGraph(State)
    graph.add_node("processor", result_processor)
    graph.add_edge(START, "processor")
    graph.add_conditional_edges(
        "processor", should_end, {True: END, False: "processor"}
    )
    return graph.compile()
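Conceptually, the conditional edge behaves like a loop that re-runs the processor node until the predicate says stop. A simplified pure-Python sketch of that routing, with no LangGraph dependency and hypothetical names (`run_until_done`, `processor`, `done`):

```python
def run_until_done(state, node, done, max_steps=10):
    # Emulate add_conditional_edges: call the node, merge its update
    # into state, then either stop (END) or route back to the node.
    for _ in range(max_steps):  # guard against infinite loops
        update = node(state)
        state = {
            **state,
            **update,
            # Append messages, mimicking the add_messages reducer.
            "messages": state["messages"] + update.get("messages", []),
        }
        if done(state):
            return state
    raise RuntimeError("flow never terminated")


def processor(state):
    if state["tool_outputs"]:
        out = state["tool_outputs"][-1]
        return {"messages": [f"Here's what I found: {out}"], "tool_outputs": []}
    return {"messages": []}


def done(state):
    return any("Here's what I found:" in m for m in state["messages"])


final = run_until_done(
    {"messages": [], "tool_outputs": ["Paris"]}, processor, done
)
print(final["messages"][-1])  # prints "Here's what I found: Paris"
```

The `max_steps` guard illustrates why the Common Pitfalls below warn about missing flow-control conditions: a predicate that never returns True would otherwise loop forever (LangGraph enforces its own recursion limit for the same reason).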
Step 5: Usage Demonstration
Example showing the complete processing flow.
Debug Tips
Testing:
- Verify message processing
- Check flow control
- Monitor state updates
- Validate outputs
def main():
    """Demonstrate result processing functionality."""
    initial_state = {
        "tool_outputs": [json.dumps({"result": "Paris is the capital of France"})],
        "messages": [],
        "tool_calls": [],
    }

    chain = create_processing_graph()
    result = chain.invoke(initial_state)

    print("\nFinal conversation state:")
    for message in result["messages"]:
        if isinstance(message, ToolMessage):
            print(f"\nTool Output ({message.name}): {message.content}")
        elif isinstance(message, AIMessage):
            print(f"\nAssistant: {message.content}")
        else:
            print(f"\n{message.__class__.__name__}: {message.content}")


if __name__ == "__main__":
    main()
Common Pitfalls
- Not handling all message types
- Improper JSON processing
- Missing flow control conditions
- Incorrect state updates
Key Takeaways
- Message Processing: Proper type handling is essential
- Flow Control: Clear conditions prevent loops
- State Management: Consistent updates maintain coherence
- Graph Structure: Clean flow enables proper processing
Next Steps
- Add more message types
- Enhance error handling
- Implement message validation
- Add conversation analytics
- Extend flow control options
Expected Output
Final conversation state:
Tool Output (search_tool): {'result': 'Paris is the capital of France'}
Assistant: Here's what I found: Paris is the capital of France