LangGraph Tutorial: Testing Configuration - Unit 2.3 Exercise 9
Joint Initiative
This tutorial is part of a collaboration between AI Product Engineer and Nebius Academy.
What You'll Learn Today
This tutorial demonstrates how to implement robust testing patterns for LangGraph applications, including mock tools, state validation, and scenario testing. Learn how to create reliable test suites for complex graph-based applications.
Key Concepts Covered
- Mock Tool Implementation
- State Validation
- Test Scenarios
- Graph Testing
- Error Handling
!pip install langchain-core
!pip install langgraph
!pip install nest_asyncio
import asyncio
from typing import Annotated, Any, TypedDict

from langchain_core.messages import BaseMessage, SystemMessage
from langchain_core.runnables import RunnableLambda
from langchain_core.tools import tool
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages
Step 1: Test State Definition
Define state structure for testing purposes.
Why This Matters
Test state definition is crucial because it:
- Ensures a consistent testing environment
- Enables validation checks
- Supports different scenarios
- Facilitates debugging
Debug Tips
- State Structure:
  - Verify required fields
  - Check type annotations
  - Monitor state mutations
- Common Issues:
  - Missing fields
  - Invalid types
  - Inconsistent state
class State(TypedDict):
"""State for testing.
Attributes:
messages: Conversation history
pending_tools: Tools awaiting execution
results: Tool execution results
errors: Error messages
validation_results: Validation check results
"""
messages: Annotated[list[BaseMessage], add_messages]
pending_tools: list[dict[str, Any]]
results: dict[str, Any]
errors: dict[str, str]
    validation_results: dict[str, bool]
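For reference, a minimal state that satisfies this schema can be written as a plain dictionary; the message content below is illustrative only.
# Illustrative only: a minimal, empty-but-valid instance of the State schema.
example_state: State = {
    "messages": [SystemMessage(content="hello")],
    "pending_tools": [],
    "results": {},
    "errors": {},
    "validation_results": {},
}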
Step 2: Mock Tool Implementation
Create mock tools for testing purposes.
Why This Matters
Mock tools are essential because they:
- Simulate real tool behavior
- Provide controlled responses
- Test error conditions
- Ensure consistent testing
Debug Tips
- Mock Implementation:
  - Verify error simulation
  - Check response formatting
  - Monitor async behavior
- Common Problems:
  - Inconsistent responses
  - Missing error cases
  - Timing issues
@tool
async def mock_tool(query: str) -> str:
"""Mock tool for testing.
Args:
query: Test query string
Returns:
Mock result string
Raises:
ValueError: If query contains "error"
"""
await asyncio.sleep(0.1) # Simulate latency
if "error" in query:
raise ValueError("Simulated error")
return f"Mock result: {query}"Step 3: Validation Implementation
Step 3: Validation Implementation
Implement state validation logic.
Why This Matters
Validation is crucial because it:
- Ensures state integrity
- Catches structural issues
- Validates content types
- Enables early detection
Debug Tips
- Validation Logic:
  - Check all required fields
  - Verify data types
  - Monitor validation results
- Common Issues:
  - Missing validations
  - False positives/negatives
  - Performance impact
def validate_state(state: State) -> dict[str, bool]:
"""Validate state structure and content.
Args:
state: State to validate
Returns:
Dictionary of validation results
"""
validations = {
"has_messages": len(state.get("messages", [])) > 0,
"has_valid_tools": all(
{"id", "tool_name", "args"} <= set(t.keys())
for t in state.get("pending_tools", [])
),
"valid_results": all(
isinstance(v, str) for v in state.get("results", {}).values()
),
}
    return validations
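As a quick check, the validator can be run against hand-built states. The sketch below contrasts a passing state with a failing one; the field values are illustrative.
# Illustrative check of validate_state on a passing and a failing state.
good_state: State = {
    "messages": [SystemMessage(content="hi")],
    "pending_tools": [{"id": "t1", "tool_name": "mock_tool", "args": {"query": "q"}}],
    "results": {},
    "errors": {},
    "validation_results": {},
}
bad_state: State = {**good_state, "messages": [], "pending_tools": [{"id": "t1"}]}
print(validate_state(good_state))  # every check is True
print(validate_state(bad_state))   # has_messages and has_valid_tools are False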
Step 4: Mock Executor Implementation
Implement test execution with validation.
Why This Matters
Mock execution is essential because it:
- Tests workflow logic
- Validates state transitions
- Verifies error handling
- Ensures data consistency
Debug Tips
- Executor Logic:
  - Verify state updates
  - Check error handling
  - Monitor validation
- Common Problems:
  - State corruption
  - Missing validations
  - Error propagation
async def mock_executor(state: State) -> State:
"""Execute mock tools with validation.
Args:
state: Current test state
Returns:
Updated state with results
"""
if not state.get("pending_tools"):
return state
validations = validate_state(state)
if not all(validations.values()):
return {
**state,
"errors": {"validation": "State validation failed"},
"validation_results": validations,
}
results = {}
errors = {}
for tool_call in state["pending_tools"]:
try:
result = await mock_tool.ainvoke(tool_call["args"]["query"])
results[tool_call["id"]] = result
except Exception as e:
errors[tool_call["id"]] = str(e)
return {
**state,
"results": results,
"errors": errors,
"validation_results": validations,
    }
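The executor can also be exercised on its own before composing the graph. This minimal sketch assumes it is awaited from an async context; the tool-call IDs and queries are illustrative.
# Minimal direct run of mock_executor outside the graph.
async def check_executor() -> None:
    state: State = {
        "messages": [SystemMessage(content="direct run")],
        "pending_tools": [
            {"id": "ok_1", "tool_name": "mock_tool", "args": {"query": "ping"}},
            {"id": "bad_1", "tool_name": "mock_tool", "args": {"query": "error ping"}},
        ],
        "results": {},
        "errors": {},
        "validation_results": {},
    }
    updated = await mock_executor(state)
    print(updated["results"])  # {'ok_1': 'Mock result: ping'}
    print(updated["errors"])   # {'bad_1': 'Simulated error'}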
Step 5: Test State Generation
Implement test state generation for scenarios.
Why This Matters
Test state generation is crucial because it:
- Provides consistent test data
- Covers different scenarios
- Tests edge cases
- Ensures comprehensive testing
Debug Tips
- State Generation:
  - Verify scenario coverage
  - Check state consistency
  - Monitor initialization
- Common Issues:
  - Missing scenarios
  - Invalid states
  - Incomplete coverage
def get_test_state(scenario: str = "basic") -> State:
"""Create test states for different scenarios.
Args:
scenario: Test scenario name
Returns:
State configured for scenario
"""
states = {
"basic": {
"messages": [SystemMessage(content="Test execution")],
"pending_tools": [
{
"id": "test_1",
"tool_name": "mock_tool",
"args": {"query": "test query"},
}
],
"results": {},
"errors": {},
"validation_results": {},
},
"error": {
"messages": [SystemMessage(content="Error test")],
"pending_tools": [
{
"id": "error_1",
"tool_name": "mock_tool",
"args": {"query": "error test"},
}
],
"results": {},
"errors": {},
"validation_results": {},
},
"invalid": {
"messages": [], # Invalid: no messages
"pending_tools": [{"id": "invalid"}], # Invalid structure
"results": {},
"errors": {},
"validation_results": {},
},
}
    return states.get(scenario, states["basic"])
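A small loop makes it easy to confirm what each scenario contains and that unknown names fall back to the basic scenario:
# Unknown scenario names fall back to the "basic" state.
for name in ["basic", "error", "invalid", "does_not_exist"]:
    scenario_state = get_test_state(name)
    print(name, "->", [t.get("tool_name") for t in scenario_state["pending_tools"]])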
Step 6: Test Graph Implementation
Create test graph structure.
Why This Matters
A test graph structure is essential because it:
- Tests graph construction
- Validates node connections
- Verifies the workflow
- Ensures proper routing
Debug Tips
- Graph Structure:
  - Verify node setup
  - Check edge connections
  - Monitor compilation
- Common Problems:
  - Missing nodes
  - Invalid connections
  - Compilation errors
def create_test_graph() -> StateGraph:
"""Create test graph with validation.
Returns:
Configured StateGraph for testing
"""
graph = StateGraph(State)
# Add nodes with validation
graph.add_node("executor", RunnableLambda(mock_executor))
graph.add_node("validator", RunnableLambda(validate_state))
# Configure edges
graph.add_edge(START, "validator")
graph.add_edge("validator", "executor")
graph.add_edge("executor", END)
    return graph
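Before running full scenarios it can help to compile the graph and print its topology; this is a small sketch that assumes the Mermaid rendering helper available on compiled LangGraph graphs.
# Compile the test graph and inspect its wiring (START -> validator -> executor -> END).
test_app = create_test_graph().compile()
print(test_app.get_graph().draw_mermaid())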
Step 7: Test Execution
Implement test execution and reporting.
Why This Matters
Test execution is crucial because it:
- Verifies system behavior
- Validates scenarios
- Reports results
- Enables debugging
Debug Tips
- Test Execution:
  - Monitor scenario runs
  - Check error handling
  - Verify reporting
- Common Issues:
  - Failed scenarios
  - Missing results
  - Report errors
async def run_test_scenario(scenario: str):
"""Run test with specific scenario.
Args:
scenario: Name of test scenario to run
"""
graph = create_test_graph()
chain = graph.compile()
test_state = get_test_state(scenario)
print(f"\nRunning {scenario} scenario:")
print("Initial state:", test_state["pending_tools"])
try:
result = await chain.ainvoke(test_state)
print("\nValidations:", result["validation_results"])
print("Results:", result["results"])
print("Errors:", result["errors"])
except Exception as e:
print(f"Test failed: {e!s}")async def demonstrate_testing():
"""Run test demonstrations."""
print("Test Configuration Demo")
print("=" * 50)
scenarios = ["basic", "error", "invalid"]
for scenario in scenarios:
await run_test_scenario(scenario)
print("-" * 50)Common Pitfalls
Common Pitfalls
- Incomplete Testing
  - Missing edge cases
  - Insufficient scenarios
  - Poor error coverage
- Validation Gaps
  - Missing checks
  - Weak assertions
  - False positives
- Mock Tool Issues
  - Unrealistic behavior
  - Missing error cases
  - Timing problems
- State Management
  - Inconsistent states
  - Missing validation
  - State corruption
Key Takeaways
- Comprehensive Testing
  - Multiple scenarios
  - Edge case coverage
  - Error validation
- Mock Implementation
  - Realistic behavior
  - Error simulation
  - Consistent results
- State Validation
  - Complete checks
  - Type safety
  - Error handling
Next Steps
- Extended Testing
  - Add performance tests
  - Create stress tests
  - Implement integration tests
- Enhanced Validation
  - Add custom validators
  - Create validation rules
  - Implement assertions (see the pytest sketch after this list)
- Test Reporting
  - Create detailed reports
  - Add metrics collection
  - Implement logging
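As a bridge toward those next steps, here is a hedged sketch of how the same pieces could be driven from pytest instead of print statements. The test names are illustrative, and it assumes pytest and pytest-asyncio are installed.
# Illustrative pytest sketch (assumes: pip install pytest pytest-asyncio).
import pytest

@pytest.mark.asyncio
async def test_basic_scenario_produces_results():
    app = create_test_graph().compile()
    result = await app.ainvoke(get_test_state("basic"))
    assert result["results"] == {"test_1": "Mock result: test query"}
    assert result["errors"] == {}

@pytest.mark.asyncio
async def test_invalid_scenario_fails_validation():
    app = create_test_graph().compile()
    result = await app.ainvoke(get_test_state("invalid"))
    assert result["errors"].get("validation") == "State validation failed"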
Expected Output
Test Configuration Demo
Running basic scenario:
Initial state: [{'id': 'test_1', 'tool_name': 'mock_tool', 'args': {'query': 'test query'}}]
Validations: {'has_messages': True, 'has_valid_tools': True, 'valid_results': True}
Results: {'test_1': 'Mock result: test query'}
Errors: {}
if __name__ == "__main__":
import nest_asyncio
nest_asyncio.apply()
asyncio.run(demonstrate_testing())

