LangGraph Tutorial: Processing Tool Results - Unit 2.1 Exercise 4

Learn to process raw tool outputs into structured messages and integrate them into conversation flows with LangGraph. This tutorial covers result handling, state management, and flow control to create coherent, dynamic workflows.

🎯 What You'll Learn Today

This tutorial is also available as a Google Colab notebook and as a downloadable file.

Joint Initiative: This tutorial is part of a collaboration between AI Product Engineer and the Nebius Academy.

This tutorial demonstrates how to process tool outputs in LangGraph, converting raw results into structured messages and managing conversation flow.

Key Concepts Covered

  1. Tool Output Processing
  2. Message Type Management
  3. Conversation Flow Control
  4. Graph-based Processing
#!pip install langchain-core
#!pip install langgraph

import json
import uuid
from typing import Annotated, Any, TypedDict

from langchain_core.messages import AIMessage, BaseMessage, ToolMessage
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages

Step 1: State Definition

We define our state structure for result processing.

Why This Matters

Proper state management is crucial because it:

  1. Ensures consistent message handling
  2. Maintains tool execution context
  3. Enables clean conversation flow
  4. Supports debugging and monitoring

Debug Tips

  1. State Verification:

    • Monitor message list updates
    • Track tool output processing
    • Verify message type transitions
    • Check state consistency
class State(TypedDict):
    """State container for result processing.

    This state implementation tracks:
    1. Message history with special handling
    2. Tool call specifications
    3. Raw tool outputs

    Attributes:
        messages: List of conversation messages
        tool_calls: List of pending tool calls
        tool_outputs: List of raw tool execution results
    """

    messages: Annotated[list[BaseMessage], add_messages]
    tool_calls: list[dict]
    tool_outputs: list[Any]
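
To make the structure concrete, here is a minimal, hand-built state as it might look just before processing. The values are illustrative, not produced by a real tool:
# Illustrative only: a state with one raw tool output waiting to be processed.
example_state: State = {
    "messages": [],                       # no conversation history yet
    "tool_calls": [],                     # no pending tool calls
    "tool_outputs": ['{"result": "42"}'], # one raw JSON string returned by a tool
}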

Step 2: Helper Functions

We implement utility functions for message processing.

Why This Matters

Helper functions are crucial because they:

  1. Enable clean message type handling
  2. Provide consistent data processing
  3. Support error recovery
  4. Facilitate debugging
def get_last_message_by_type(
    messages: list[BaseMessage], message_type: type[BaseMessage]
) -> BaseMessage | None:
    """Find the last message of a specific type.

    Args:
        messages: List of messages to search
        message_type: Type of message to find

    Returns:
        Last message of specified type or None
    """
    for message in reversed(messages):
        if isinstance(message, message_type):
            return message
    return None
def process_tool_output(tool_output: Any) -> tuple[str, Any]:
    """Process raw tool output into usable format.

    Args:
        tool_output: Raw output to process

    Returns:
        Tuple of (processed_content, raw_data)
    """
    try:
        data = json.loads(tool_output) if isinstance(tool_output, str) else tool_output
        # Prefer the "result" field when the payload is a dict; otherwise stringify.
        if isinstance(data, dict):
            return data.get("result", str(data)), data
        return str(data), data
    except json.JSONDecodeError:
        # Not valid JSON: fall back to the raw value.
        return str(tool_output), tool_output
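
A quick sanity check of both helpers; the message contents below are made up for illustration:
# Find the most recent AIMessage in a small, hand-built history.
history = [
    AIMessage(content="Searching..."),
    ToolMessage(content="raw", tool_call_id="t1"),
]
print(get_last_message_by_type(history, AIMessage).content)  # Searching...

# JSON string input: parsed, and the "result" field is extracted.
print(process_tool_output('{"result": "Paris"}'))  # ('Paris', {'result': 'Paris'})

# Non-JSON input: falls back to the raw string unchanged.
print(process_tool_output("plain text"))  # ('plain text', 'plain text')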

Step 3: Result Processor Implementation

We implement the core result processing logic.

Why This Matters

Result processing is crucial because it:

  1. Converts raw outputs to structured messages
  2. Maintains conversation coherence
  3. Enables proper message typing
  4. Supports conversation flow

Debug Tips

  1. Processing Issues:

    • Monitor message creation
    • Track ID generation
    • Verify content formatting
    • Check type conversions
def result_processor(state: State) -> State:
    """Process tool results into appropriate message types.

    This function demonstrates:
    1. Tool output processing
    2. Message type conversion
    3. Conversation structuring

    Args:
        state: Current conversation state

    Returns:
        Updated state with new messages
    """
    if not state.get("tool_outputs"):
        return {"messages": [], "tool_calls": [], "tool_outputs": []}

    tool_output = state["tool_outputs"][-1]
    processed_content, raw_data = process_tool_output(tool_output)

    tool_message = ToolMessage(
        content=str(raw_data),
        tool_call_id=str(uuid.uuid4()),
        name="search_tool",
    )

    ai_message = AIMessage(content=f"Here's what I found: {processed_content}")

    return {
        "messages": [tool_message, ai_message],
        "tool_calls": [],
        "tool_outputs": [],
    }
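
Before wiring the node into a graph, you can exercise it in isolation. A minimal sketch with an illustrative tool output:
# Run the processor node directly on a hand-built state.
partial = result_processor({
    "messages": [],
    "tool_calls": [],
    "tool_outputs": ['{"result": "The Louvre is in Paris"}'],
})
for msg in partial["messages"]:
    print(type(msg).__name__, "->", msg.content)
# ToolMessage -> {'result': 'The Louvre is in Paris'}
# AIMessage -> Here's what I found: The Louvre is in Paris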

Step 4: Flow Control Implementation

We implement conversation flow control logic.

Why This Matters

Flow control is crucial because it:

  1. Manages conversation progression
  2. Prevents infinite loops
  3. Ensures proper termination
  4. Maintains conversation coherence
def should_end(state: State) -> bool:
    """Determine if conversation should end.

    Args:
        state: Current conversation state

    Returns:
        Boolean indicating whether to end
    """
    if not state.get("messages"):
        return False

    last_ai = get_last_message_by_type(state["messages"], AIMessage)
    if not last_ai:
        return False

    return "Here's what I found:" in last_ai.content
def create_processing_graph() -> StateGraph:
    """Create and configure the processing graph.

    Returns:
        Compiled StateGraph
    """
    graph = StateGraph(State)
    graph.add_node("processor", result_processor)
    graph.add_edge(START, "processor")
    graph.add_conditional_edges(
        "processor", should_end, {True: END, False: "processor"}
    )
    return graph.compile()

Step 5: Usage Demonstration

Example showing the complete processing flow.

Debug Tips

  1. Testing:

    • Verify message processing
    • Check flow control
    • Monitor state updates
    • Validate outputs
def main():
    """Demonstrate result processing functionality."""
    initial_state = {
        "tool_outputs": [json.dumps({"result": "Paris is the capital of France"})],
        "messages": [],
        "tool_calls": [],
    }

    chain = create_processing_graph()
    result = chain.invoke(initial_state)

    print("\nFinal conversation state:")
    for message in result["messages"]:
        if isinstance(message, ToolMessage):
            print(f"\nTool Output ({message.name}): {message.content}")
        elif isinstance(message, AIMessage):
            print(f"\nAssistant: {message.content}")
        else:
            print(f"\n{message.__class__.__name__}: {message.content}")

if __name__ == "__main__":
    main()
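
As a separate sketch, you can see the non-JSON fallback in process_tool_output flow through the whole graph by invoking it with a plain-string output (the value is illustrative):
# A plain-string tool output exercises the JSONDecodeError fallback path.
plain_result = create_processing_graph().invoke({
    "messages": [],
    "tool_calls": [],
    "tool_outputs": ["The Eiffel Tower is 330 m tall"],
})
print(plain_result["messages"][-1].content)
# Here's what I found: The Eiffel Tower is 330 m tall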

Common Pitfalls

  1. Not handling all message types
  2. Improper JSON processing
  3. Missing flow control conditions
  4. Incorrect state updates

Key Takeaways

  1. Message Processing: Proper type handling is essential
  2. Flow Control: Clear conditions prevent loops
  3. State Management: Consistent updates maintain coherence
  4. Graph Structure: Clean flow enables proper processing

Next Steps

  1. Add more message types
  2. Enhance error handling
  3. Implement message validation
  4. Add conversation analytics
  5. Extend flow control options

Expected Output

Final conversation state:

Tool Output (search_tool): {'result': 'Paris is the capital of France'}

Assistant: Here's what I found: Paris is the capital of France
