The enhanced tool adapter includes comprehensive structured logging using Go's standard log/slog package. This provides production-ready observability for monitoring, debugging, and operational insights.
Note: This document focuses on logging events and log analysis. For metrics collection, performance monitoring, and alerting, see METRICS.md.
When: Tools are injected into system prompt
Level: INFO
Message: "Transformed request: moved tools into system prompt"
Key Fields:
- `tool_count`: Number of tools being transformed
- `tool_names`: Array of tool names
- `prompt_length`: Length of generated system prompt
```json
{
  "time": "2024-01-30T10:30:15Z",
  "level": "INFO",
  "msg": "Transformed request: moved tools into system prompt",
  "tool_count": 2,
  "tool_names": ["get_weather", "calculate_tip"],
  "prompt_length": 245
}
```

When: Function calls are detected and converted
Level: INFO
Message: "Transformed response: detected and converted function calls"
Key Fields:
- `function_count`: Number of function calls detected
- `function_names`: Array of function names called
- `content_length`: Length of original response content
- `json_candidates`: Number of JSON blocks found
- `function_arguments`: Function arguments (DEBUG level only)
```json
{
  "time": "2024-01-30T10:30:15Z",
  "level": "INFO",
  "msg": "Transformed response: detected and converted function calls",
  "function_count": 2,
  "function_names": ["get_weather", "calculate_tip"],
  "content_length": 134,
  "json_candidates": 1
}
```

Level: DEBUG
Events:
- Tool prompt generation timing (using `time.Duration` for precision)
- JSON parsing performance with nanosecond accuracy
- Function call extraction duration
Example DEBUG log:
```json
{
  "time": "2024-01-30T10:30:15Z",
  "level": "DEBUG",
  "msg": "Built tool prompt",
  "tool_count": 3,
  "prompt_length": 387,
  "build_duration": "2.3ms"
}
```

Level: ERROR
Events:
- Tool prompt generation failures
- JSON parsing errors
- Function call validation errors
`WithLogger(...)` accepts a custom logger instance, giving complete control over formatting, output destination, and levels.
```go
// Production: JSON to stdout
prodLogger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
    Level: slog.LevelInfo,
}))

adapter := tooladapter.New(
    tooladapter.WithLogger(prodLogger),
)
```

`WithLogLevel(...)` is a convenience option for controlling the log level while keeping the default text formatting.
```go
adapter := tooladapter.New(
    tooladapter.WithLogLevel(slog.LevelDebug),
)
```

Use explicit configuration:
- `WithLogger(...)` for custom handlers (JSON/text)
- `WithLogLevel(...)` for simple level-only control
For example, a production logger enriched with service-wide context:

```go
logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
    Level: slog.LevelInfo,
})).With(
    "service", "my-api",
    "version", "1.2.3",
    "environment", "production",
)

// Attach the service-scoped logger to the adapter (see WithLogger above).
adapter := tooladapter.New(tooladapter.WithLogger(logger))
```

- Production: `INFO` level for operational events
- Development: `DEBUG` level for detailed troubleshooting
- Testing: `ERROR` level to minimize noise
```go
// Add request-specific context
logger := slog.With(
    "request_id", requestID,
    "user_id", userID,
    "correlation_id", correlationID,
)

adapter := tooladapter.New(tooladapter.WithLogger(logger))
```

The adapter automatically excludes function arguments from INFO-level logs; they only appear at DEBUG level, making INFO safe for production use.
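If the surrounding application adds its own sensitive attributes, slog's `ReplaceAttr` hook can redact them before the handler writes the record. A minimal sketch, assuming a hypothetical `user_input` attribute added by your own code (it is not an adapter field):

```go
handler := slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
    Level: slog.LevelInfo,
    ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
        // Redact an application-specific attribute; "user_input" is illustrative.
        if a.Key == "user_input" {
            return slog.String(a.Key, "[redacted]")
        }
        return a
    },
})

adapter := tooladapter.New(tooladapter.WithLogger(slog.New(handler)))
```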
For development:

```go
adapter := tooladapter.New(tooladapter.WithLogLevel(slog.LevelDebug))
```

- Text formatting for readability
- DEBUG level for detailed information
- Source code locations included
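If you prefer to construct that development logger explicitly rather than rely on the defaults, the same properties can be spelled out with slog's text handler (a sketch; writing to `os.Stderr` is simply a common choice):

```go
devLogger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{
    Level:     slog.LevelDebug,
    AddSource: true, // include source code locations
}))

adapter := tooladapter.New(tooladapter.WithLogger(devLogger))
```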
For testing:

```go
adapter := tooladapter.New(tooladapter.WithLogLevel(slog.LevelError))
```

- ERROR level only to reduce test noise
- Focus on functionality over logging
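When even ERROR output is unwanted, for example in tests that deliberately exercise failure paths, log records can be discarded entirely. A small sketch, assuming the log output itself is not under test:

```go
silentLogger := slog.New(slog.NewTextHandler(io.Discard, nil))
adapter := tooladapter.New(tooladapter.WithLogger(silentLogger))
```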
```go
// Use environment variable to control logging
level := slog.LevelInfo
if os.Getenv("LOG_LEVEL") == "debug" {
    level = slog.LevelDebug
}

adapter := tooladapter.New(tooladapter.WithLogLevel(level))
```

```bash
# Find slow transformations (durations are logged as human-readable strings)
cat logs.json | jq 'select(.msg | contains("Built tool prompt")) | select(.build_duration | test("^[0-9]+(\\.[0-9]+)?(ms|s)"))'
# Performance analysis across operations
cat logs.json | jq 'select(.build_duration) | {tools: .tool_count, duration: .build_duration}'
```

```bash
# Most popular tools
cat logs.json | jq -r '.function_names[]?' | sort | uniq -c | sort -nr
# Function call patterns
cat logs.json | jq 'select(.function_count > 1) | {count: .function_count, names: .function_names}'
```

```bash
# Streaming vs non-streaming detection
cat logs.json | jq 'select(.streaming == true) | {streaming: .streaming, functions: .function_names}'
cat logs.json | jq 'select(.streaming == false) | {streaming: .streaming, functions: .function_names}'
```

```bash
# Extract timing information from debug logs
cat logs.json | jq 'select(.build_duration) | {
timestamp: .time,
operation: .msg,
duration: .build_duration,
tool_count: .tool_count
}'
# Find operations taking longer than expected (roughly 10ms or more)
cat logs.json | jq 'select(.build_duration) | select(.build_duration | test("^([0-9]{2,}(\\.[0-9]+)?ms|[0-9]+(\\.[0-9]+)?s)"))'
```

Logging provides operational visibility through structured events, while metrics provide quantitative measurements for monitoring and alerting. For comprehensive production observability:
- Use logging for debugging, error tracking, and operational event analysis
- Use metrics for performance monitoring, alerting, and dashboard creation
- Combine both for complete system observability
Logging provides human-readable timing information in log events:
```json
{
  "msg": "Built tool prompt",
  "build_duration": "2.3ms",
  "tool_count": 3
}
```

Metrics provide precise `time.Duration` values for quantitative analysis:
```go
// Metrics callback receives precise timing
eventData.Performance.ProcessingDuration // time.Duration with nanosecond precision
```

See METRICS.md for detailed metrics integration, including:
- Performance monitoring with Prometheus/DataDog using precise `time.Duration` measurements
- Function call success rates and timing distributions
- Processing time breakdowns and optimization insights
This logging strategy provides comprehensive observability for debugging and operational awareness. It complements the metrics system to give complete production monitoring: human-readable logs alongside precise quantitative measurements.