The memory analyzer is now integrated into Teranode and available on all services that have the profiler enabled.
The memory analyzer handler is registered in `daemon/daemon_services.go` in the `startProfilerAndMetrics` function (line 189):

```go
// memory analyzer support (includes mmap, non-heap memory)
logger.Infof("Memory analyzer available on http://%s/debug/memory", profilerAddr)
mux.HandleFunc("/debug/memory", profiling.MemoryProfileHandler)
```

The memory analyzer is available when:
- `ProfilerAddr` is set in settings (e.g., `:6060`)
- The service has started successfully
- You're running on Linux (requires `/proc/[pid]/smaps`)
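The Linux requirement comes down to whether the smaps file for the process can be read. A minimal sketch of that precondition check; `smapsPath` and `smapsAvailable` are hypothetical helpers, not names from the Teranode codebase:

```go
package main

import (
	"fmt"
	"os"
)

// smapsPath builds the proc path for a given PID. Hypothetical helper;
// the real analyzer may structure this differently.
func smapsPath(pid int) string {
	return fmt.Sprintf("/proc/%d/smaps", pid)
}

// smapsAvailable reports whether this process's smaps file is readable,
// which is the precondition for the memory analyzer.
func smapsAvailable() bool {
	_, err := os.Stat(smapsPath(os.Getpid()))
	return err == nil
}

func main() {
	fmt.Println(smapsPath(1234)) // /proc/1234/smaps
	fmt.Println("smaps available:", smapsAvailable())
}
```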
For any Teranode service with profiler enabled:

```bash
# Get complete memory breakdown with top 20 regions (default)
curl http://localhost:6060/debug/memory

# Get breakdown with top 50 regions
curl http://localhost:6060/debug/memory?top=50

# Get breakdown with top 100 regions
curl http://localhost:6060/debug/memory?top=100
```

```bash
# Port-forward to a pod
kubectl port-forward subtree-validator-pod 6060:6060

# In another terminal, get memory analysis
curl http://localhost:6060/debug/memory > memory_analysis.txt
```

```bash
# If service exposes port 6060
curl http://localhost:6060/debug/memory
```

The memory analyzer is available on all services that have the profiler enabled, including:
- Blockchain
- Block Assembly
- Block Validation
- Subtree Validation
- Validator
- Propagation
- Asset Server
- RPC
- P2P
- Legacy
- Alert
- Pruner
- Block Persister
- UTXO Persister
Example output:

```
=== Complete Memory Breakdown ===

Go Runtime:
  Heap:               2048 MB
  Stack:               128 MB
  Metadata:            256 MB
  Subtotal:           2432 MB

Named Memory Regions (mmap):
  TXMetaCache         4096 MB
  Subtotal:           4096 MB

File-Backed Memory:
  Shared Libraries      64 MB
  Teranode Binary      181 MB
  Subtotal:            245 MB

Anonymous Memory:
  Read-Write:          512 MB
  Read-Only:            32 MB
  Executable:           16 MB
  Subtotal:            560 MB

=== Totals ===
Virtual Memory:      12288 MB
RSS (Resident):       7333 MB
PSS (Proportional):   7333 MB

=== Top 20 Memory Regions ===
ADDRESS          PERMS  RSS      PRIVATE  SHARED  NAME
--------------------------------------------------
7f8a40000000...  rw-p   4096 MB  4096 MB  0 MB    [anon: TXMetaCache]
7f8a30000000...  rw-p   2048 MB  2048 MB  0 MB    [heap]
...
```
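The buckets in the breakdown above follow from each smaps region's name and permissions. A minimal sketch of that classification; `regionCategory` is a hypothetical helper and the real logic in `internal/profiling/memory_analyzer.go` may categorize differently:

```go
package main

import (
	"fmt"
	"strings"
)

// regionCategory maps an smaps region's name and permissions to one
// of the report's buckets. Hypothetical helper for illustration only.
func regionCategory(name, perms string) string {
	switch {
	case name == "[heap]" || name == "[stack]":
		return "Go Runtime"
	case strings.HasPrefix(name, "[anon:"):
		return "Named Memory Regions (mmap)"
	case strings.HasPrefix(name, "/"):
		return "File-Backed Memory"
	case name == "" && strings.Contains(perms, "w"):
		return "Anonymous Memory (Read-Write)"
	case name == "":
		return "Anonymous Memory (Read-Only/Executable)"
	default:
		return "Other"
	}
}

func main() {
	fmt.Println(regionCategory("[anon: TXMetaCache]", "rw-p")) // Named Memory Regions (mmap)
	fmt.Println(regionCategory("/usr/lib/libc.so.6", "r-xp"))  // File-Backed Memory
	fmt.Println(regionCategory("", "rw-p"))                    // Anonymous Memory (Read-Write)
}
```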
- `internal/profiling/memory_analyzer.go` - Core memory analysis logic
    - Parses `/proc/self/smaps`
    - Categorizes memory by type
- `internal/profiling/memory_handler.go` - HTTP handler for web access
    - Query parameter support
- `internal/profiling/README.md` - Detailed usage documentation
- `cmd/memanalyzer/main.go` - Standalone CLI tool
    - Can analyze any process by PID
- `daemon/daemon_services.go` - Integration point
    - Added import: `"github.com/bsv-blockchain/teranode/internal/profiling"`
    - Registered handler in `startProfilerAndMetrics` function
The memory analyzer joins other debugging endpoints available on the same port:

- `/debug/pprof/` - Go profiler (CPU, memory, goroutines, etc.)
- `/debug/pprof/profile` - CPU profile
- `/debug/pprof/heap` - Heap profile
- `/debug/pprof/goroutine` - Goroutine dump
- `/debug/fgprof` - Full goroutine profiler
- `/debug/memory` - Complete memory analyzer (NEW)
- `/metrics` - Prometheus metrics (if enabled)
- Check overall RSS:

    ```bash
    kubectl top pod subtree-validator-pod
    ```

- Get complete breakdown:

    ```bash
    curl http://localhost:6060/debug/memory > memory.txt
    ```

- Compare with heap profile:

    ```bash
    curl http://localhost:6060/debug/pprof/heap > heap.prof
    go tool pprof -http=:8080 heap.prof
    ```

- Analyze the difference:
    - If RSS >> heap: Check TXMetaCache and anonymous memory
    - If RSS ≈ heap: Issue is in Go allocations
    - If TXMetaCache is large: Check configuration
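Those rules of thumb can be encoded as a small helper. A sketch only: `diagnose` is a hypothetical function and the 1.5x RSS-to-heap ratio is an illustrative assumption, not a value from the analyzer:

```go
package main

import "fmt"

// diagnose applies the rules of thumb above: compare resident memory
// to the Go heap and flag where to look next. The 1.5x threshold is
// an illustrative assumption, not taken from the analyzer.
func diagnose(rssMB, heapMB, txMetaCacheMB float64) string {
	switch {
	case txMetaCacheMB > heapMB:
		return "TXMetaCache is large: check configuration"
	case rssMB > 1.5*heapMB:
		return "RSS >> heap: check TXMetaCache and anonymous memory"
	default:
		return "RSS ~= heap: issue is in Go allocations"
	}
}

func main() {
	// Values from the example breakdown: RSS 7333 MB, heap 2048 MB,
	// TXMetaCache 4096 MB.
	fmt.Println(diagnose(7333, 2048, 4096)) // TXMetaCache is large: check configuration
	fmt.Println(diagnose(2100, 2048, 0))    // RSS ~= heap: issue is in Go allocations
}
```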
Add to monitoring dashboards:

```bash
# Cron job or monitoring script
while true; do
    curl -s http://localhost:6060/debug/memory | \
        grep "RSS (Resident)" | \
        awk '{print $3}' | \
        send_to_metrics_system
    sleep 60
done
```

- Linux: Fully supported (requires `/proc/[pid]/smaps`)
- macOS: Not supported (no `/proc` filesystem)
- Windows: Not supported

For non-Linux platforms, the handler will return an error message.
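That platform guard might look like the following sketch; `platformSupported` is a hypothetical helper shown with explicit GOOS strings (the real handler would pass `runtime.GOOS`):

```go
package main

import (
	"errors"
	"fmt"
)

// platformSupported reports whether the smaps-based analyzer can run
// for the given GOOS value, mirroring the support matrix above.
// Hypothetical helper; in a real handler, pass runtime.GOOS.
func platformSupported(goos string) error {
	if goos != "linux" {
		return errors.New("memory analyzer requires Linux (/proc/[pid]/smaps)")
	}
	return nil
}

func main() {
	fmt.Println(platformSupported("linux"))  // <nil>
	fmt.Println(platformSupported("darwin")) // memory analyzer requires Linux (/proc/[pid]/smaps)
}
```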
- Parsing `/proc/self/smaps`: ~1-5ms
- Memory overhead: <1MB
- Safe to call frequently (e.g., every 10 seconds)
- No GC pressure
If the endpoint returns an error, this means:

- Running on a non-Linux platform
- `/proc` not mounted (container misconfiguration)
- Insufficient permissions
If the connection is refused, this means:

- Profiler not enabled (check the `ProfilerAddr` setting)
- Service hasn't started yet
- Wrong port or service
If TXMetaCache does not appear in the output, this means:

- TXMetaCache hasn't been initialized yet
- Not using the mmap tracking code
- Running on non-Linux (regions won't have names)
Planned improvements:
- Export metrics to Prometheus format
- Alert on abnormal memory growth
- Historical tracking and trends
- Integration with pprof UI
- Support for other platforms (macOS, Windows)