LangGraph Tutorial: Message History Management with Sliding Windows - Unit 1.2 Exercise 3
This tutorial is also available in Google Colab here or for download here
Joint Initiative: This tutorial is part of a collaboration between AI Product Engineer and the Nebius Academy.
🎯 What You'll Learn Today
This tutorial demonstrates how to implement efficient message history management in LangGraph using a sliding window approach. We'll create a system that maintains conversation context while optimizing memory usage through automatic message pruning.
Key Concepts Covered
- Sliding Window Implementation
- State-Based Message Management
- Graph Construction with History Control
- Memory Optimization Techniques
!pip install langchain-core
!pip install langgraph

from typing import Annotated, TypedDict

from langchain_core.messages import BaseMessage, HumanMessage
from langgraph.graph import START, StateGraph
from langgraph.graph.message import add_messages
Step 1: State Definition with Window Support
We define a state structure that supports windowed message management for efficient conversation history handling.
class State(TypedDict):
    """State container with windowed message history management.

    This implementation introduces three crucial elements:
    1. Message storage with proper LangGraph annotations
    2. Conversation context summarization
    3. Configurable window size for history management

    Attributes:
        messages: List of conversation messages managed by the sliding window
        summary: Condensed context from older messages
        window_size: Maximum number of messages to retain in active memory

    Note:
        The window_size parameter is critical for managing memory usage
        while maintaining sufficient conversation context.
    """

    messages: Annotated[list[BaseMessage], add_messages]
    summary: str
    window_size: int
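The add_messages annotation tells LangGraph how to merge message updates into the state. As a rough illustration, the reducer can also be called directly; the variable names below are just for this example:

# Sketch: add_messages appends incoming messages to the existing list
# (and assigns IDs to messages that don't have one yet).
existing = [HumanMessage(content="First message")]
incoming = [HumanMessage(content="Second message")]

merged = add_messages(existing, incoming)
print(len(merged))  # 2 -- the incoming message is appended, not overwritten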
Why This Matters
Efficient message history management is crucial for several reasons:
- Production environments have limited memory resources
- Large message histories degrade performance
- Older messages lose relevance over time
- Shorter contexts reduce API costs
Step 2: Window Management Implementation
We implement the sliding window logic for message history control.
def message_windowing(state: State) -> State:
    """Maintain optimal message history through window-based pruning.

    This function implements several key concepts:
    1. Automatic message pruning
    2. Window-based history management
    3. State preservation during updates

    The windowing process follows this flow:
    1. Check current message count
    2. Apply window constraints if needed
    3. Preserve state consistency

    Args:
        state: Current conversation state with messages and window configuration

    Returns:
        State: Updated state with windowed message history

    Example:
        >>> state = {
        ...     "messages": [HumanMessage(content=f"Message {i}")
        ...                  for i in range(5)],
        ...     "summary": "",
        ...     "window_size": 3,
        ... }
        >>> result = message_windowing(state)
        >>> len(result["messages"])  # Should be 3
        3
    """
    # Keep only the most recent window_size messages.
    if len(state["messages"]) > state["window_size"]:
        state["messages"] = state["messages"][-state["window_size"]:]
    return state
Debug Tips
- Window Management Issues:
  - Log message counts before and after windowing (see the logging sketch after this list)
  - Verify that the oldest messages are being removed
  - Check that the window_size parameter is being respected
- State Consistency:
  - Monitor state fields during windowing
  - Verify message order preservation
  - Validate summary updates
- Common Errors:
  - IndexError: check window size calculations
  - State mutation issues: verify proper state copying
  - Message ordering problems: check slicing logic
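For the first tip, a minimal logging wrapper might look like the sketch below; message_windowing_logged is a hypothetical helper, not part of the exercise:

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("windowing")


def message_windowing_logged(state: State) -> State:
    """Sketch: wrap the windowing node with before/after debug logging."""
    logger.debug("Messages before windowing: %d", len(state["messages"]))
    state = message_windowing(state)
    logger.debug("Messages after windowing: %d", len(state["messages"]))
    return state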
Step 3: Graph Construction with Window Management
We create a LangGraph structure that incorporates our windowing logic.
# Initialize graph with windowed state support
graph = StateGraph(State)

# Add windowing node for message management
graph.add_node("windowing", message_windowing)

# Configure entry point with immediate windowing
graph.add_edge(START, "windowing")
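To execute the graph end to end, it still needs a terminating edge and a compile step. The sketch below assumes that wiring; the exact topology may differ in the full exercise. Note that the add_messages reducer merges updates by message ID rather than replacing the list, so pruning inside a compiled graph typically relies on the RemoveMessage pattern sketched under Variations and Extensions, while calling message_windowing directly, as in the Example Usage below, behaves as expected.

from langgraph.graph import END

# Sketch: terminate after the windowing node, then compile and invoke.
graph.add_edge("windowing", END)
compiled_graph = graph.compile()

output = compiled_graph.invoke(
    {
        "messages": [HumanMessage(content=f"Message {i}") for i in range(5)],
        "summary": "",
        "window_size": 3,
    }
)
# Because add_messages merges by ID rather than replacing the list, the
# pruned messages may still be present in the returned state.
print(len(output["messages"]))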
Why This Matters
Proper graph construction ensures:
- Consistent message processing flow
- Automatic window management
- Scalable conversation handling
Key Takeaways
- Window Management:
  - Use sliding windows for memory efficiency
  - Maintain recent context while pruning old messages
  - Configure window size based on requirements
- State Handling:
  - Preserve state consistency during windowing
  - Implement clean pruning logic
  - Maintain message order
Common Pitfalls
- Incorrect window size calculations
- Not preserving message order
- Missing state field updates
- Improper edge configuration
Next Steps
- Add dynamic window sizing
- Implement context summarization (see the sketch after this list)
- Add message priority handling
- Optimize memory usage further
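As a starting point for the context summarization step, here is a minimal sketch; summarize_and_window is a hypothetical helper, and the naive string concatenation stands in for an LLM call that would produce a real summary:

def summarize_and_window(state: State) -> State:
    """Sketch: fold messages that fall outside the window into the summary
    before pruning them (hypothetical helper, not part of the exercise)."""
    messages = state["messages"]
    window = state["window_size"]
    if len(messages) > window:
        pruned = messages[:-window]
        # Naive placeholder: concatenate pruned contents; an LLM call could
        # replace this to produce a genuinely condensed summary.
        pruned_text = " ".join(str(m.content) for m in pruned)
        state["summary"] = (state["summary"] + " " + pruned_text).strip()
        state["messages"] = messages[-window:]
    return state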
Example Usage
initial_state = {
    "messages": [HumanMessage(content=f"Message {i}") for i in range(5)],
    "summary": "",
    "window_size": 3,
}

result = message_windowing(initial_state)
print(f"Messages after windowing: {len(result['messages'])}")
Variations and Extensions
- Dynamic Window Sizing:
  - Adjust window size based on message importance
  - Implement priority-based retention
  - Example use case: adaptive context management
- Smart Pruning (see also the reducer-aware sketch after this list):
  - Content-based message retention
  - Semantic importance scoring
  - Scenario: context-aware memory management
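One extension worth sketching: inside a compiled graph, the add_messages reducer merges updates by message ID, so returning a shorter list does not delete older entries. LangGraph's documented pattern for reducer-aware deletion is to return RemoveMessage objects; a minimal sketch, assuming the messages already carry IDs assigned by the reducer:

from langchain_core.messages import RemoveMessage


def message_windowing_removal(state: State) -> dict:
    """Sketch: prune messages outside the window by asking the add_messages
    reducer to delete them, rather than slicing the list directly."""
    messages = state["messages"]
    window = state["window_size"]
    if len(messages) <= window:
        return {"messages": []}
    # Each RemoveMessage tells the reducer to drop the message with that ID.
    return {"messages": [RemoveMessage(id=m.id) for m in messages[:-window]]}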
Expected Output
Messages after windowing: 3