The Node Exporter Collector is TFO-Agent's built-in replacement for prometheus/node_exporter. It collects detailed system metrics (per-CPU, per-device, per-interface, per-mountpoint) as continuous time-series that flow through both the OTLP exporter and the Prometheus `/metrics` endpoint.

Once it is enabled, the standalone node_exporter DaemonSet can be removed: TFO-Agent provides equivalent metrics.
## Architecture

```mermaid
graph TB
    subgraph "TFO-Agent"
        subgraph "Node Exporter Collector"
            CPU[CPU Sub-collector<br/>per-CPU mode times + freq]
            MEM[Memory Sub-collector<br/>detailed memory + swap]
            DISK[Disk I/O Sub-collector<br/>per-device counters]
            FS[Filesystem Sub-collector<br/>per-mountpoint usage]
            NET[Network Sub-collector<br/>per-interface + TCP + ARP]
            LOAD[LoadAvg Sub-collector<br/>1m / 5m / 15m]
            THERM[Thermal Sub-collector<br/>hardware temperatures]
            LINUX[Linux Sub-collector<br/>conntrack, PSI, vmstat,<br/>sockstat, entropy, filedesc, stat]
            TF[Textfile Sub-collector<br/>custom *.prom files]
        end
        SC[System Collector]
        KC[Kubernetes Collector]
        BUF[Buffer]
        OTLP[OTLP Exporter]
        PROM_EP["Prometheus /metrics :8888"]
    end
    subgraph "TelemetryFlow Platform"
        API[TFO Backend API]
        CH[(ClickHouse)]
    end
    CPU & MEM & DISK & FS & NET & LOAD & THERM & LINUX & TF --> BUF
    SC --> BUF
    KC --> BUF
    BUF --> OTLP
    CPU & MEM & DISK & FS & NET & LOAD & THERM & LINUX & TF --> PROM_EP
    SC --> PROM_EP
    KC --> PROM_EP
    OTLP -->|"OTLP gRPC :4317"| CH
    PROM[Prometheus Server] -->|"GET /metrics"| PROM_EP
```
## Comparison: TFO-Agent vs Separate Components

```mermaid
graph LR
    subgraph "Traditional Stack"
        NE[node-exporter<br/>DaemonSet]
        KSM[kube-state-metrics<br/>Deployment]
        PA[Prometheus Agent<br/>StatefulSet]
        NE -->|scrape| PA
        KSM -->|scrape| PA
        PA -->|"remote_write"| PROM_REMOTE[Prometheus Server]
    end
    subgraph "TFO-Agent Stack"
        TFO[TFO-Agent<br/>DaemonSet<br/>= node-exporter<br/>+ KSM<br/>+ Prometheus scrape]
        TFO -->|"OTLP gRPC"| TFO_BACKEND[TFO Backend]
        TFO -->|"/metrics :8888"| PROM_SCRAPE[Prometheus<br/>optional scrape]
    end
    style TFO fill:#2d6,color:#fff
```
| Capability | node-exporter | kube-state-metrics | Prometheus Agent | TFO-Agent |
|------------|---------------|--------------------|------------------|-----------|
| System Metrics | Yes | - | - | Yes (built-in) |
| Per-CPU Metrics | Yes | - | - | Yes (node_exporter) |
| Per-Device Disk | Yes | - | - | Yes (node_exporter) |
| Filesystem Metrics | Yes | - | - | Yes (node_exporter) |
| Network Metrics | Yes | - | - | Yes (node_exporter) |
| Linux /proc | Yes | - | - | Yes (conntrack, PSI, etc.) |
| K8s Resource State | - | Yes | - | Yes (k8s collector) |
| Prometheus Scrape | `/metrics` | `/metrics` | Remote Write | `/metrics` |
| OTLP Export | - | - | - | Yes (gRPC + HTTP) |
| Deployment | DaemonSet | Deployment | StatefulSet | DaemonSet (single binary) |
| Textfile Collector | Yes | - | - | Yes (*.prom files) |
## Configuration

### Minimal

```yaml
collectors:
  node_exporter:
    enabled: true
```

All sub-collectors are enabled by default when `node_exporter` is enabled.
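Among the defaults is the Textfile Sub-collector, which ingests user-supplied `*.prom` files written in the standard Prometheus exposition format. A minimal example file follows; the metric name is illustrative, and the directory the agent reads such files from is deployment-specific:

```text
# HELP my_backup_last_success_timestamp_seconds Unix time of the last successful backup
# TYPE my_backup_last_success_timestamp_seconds gauge
my_backup_last_success_timestamp_seconds 1712345678
```

A cron job or script can atomically write such a file (write to a temp file, then rename) so the collector never reads a half-written snapshot.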
### Collection Flow

```mermaid
flowchart TB
    NE[NodeExporterCollector.Collect]
    NE --> CPU["collectCPU()<br/>gopsutil cpu.Times<br/>gopsutil cpu.Info"]
    NE --> LOAD["collectLoadAvg()<br/>gopsutil load.Avg"]
    NE --> MEM["collectMemory()<br/>gopsutil mem.VirtualMemory<br/>gopsutil mem.SwapMemory"]
    NE --> DISK["collectDiskIO()<br/>gopsutil disk.IOCounters"]
    NE --> FS["collectFilesystem()<br/>gopsutil disk.Partitions<br/>gopsutil disk.Usage"]
    NE --> NET["collectNetwork()<br/>gopsutil net.IOCounters<br/>gopsutil net.Interfaces<br/>gopsutil net.Connections"]
    NE --> THERM["collectThermal()<br/>gopsutil host.SensorsTemperatures"]
    NE --> LINUX["collectLinux()<br/>/proc/sys/net/netfilter/*<br/>/proc/pressure/*<br/>/proc/vmstat<br/>/proc/net/sockstat<br/>/proc/stat<br/>/proc/sys/kernel/random/*<br/>/proc/sys/fs/file-nr<br/>/proc/net/arp"]
    NE --> TF["collectTextfile()<br/>*.prom file reader"]
    subgraph "Cross-Platform (gopsutil)"
        CPU
        LOAD
        MEM
        DISK
        FS
        NET
        THERM
    end
    subgraph "Linux-Only (/proc)"
        LINUX
    end
    subgraph "User-Defined"
        TF
    end
```
## Metric Catalog

All metrics use the `node.` prefix internally, which becomes `tfo_node_*` in Prometheus exposition format.

When both `node_exporter` and `prometheus_server` are enabled, metrics are exposed at `:8888/metrics`:

```text
# HELP tfo_node_cpu_seconds CPU time in seconds
# TYPE tfo_node_cpu_seconds counter
tfo_node_cpu_seconds{cpu="0",mode="user"} 1234.56
tfo_node_cpu_seconds{cpu="0",mode="system"} 567.89
tfo_node_cpu_seconds{cpu="0",mode="idle"} 98765.43
...

# HELP tfo_node_memory_total_bytes Total memory in bytes
# TYPE tfo_node_memory_total_bytes gauge
tfo_node_memory_total_bytes 17179869184

# HELP tfo_node_filesystem_size_bytes Filesystem size in bytes
# TYPE tfo_node_filesystem_size_bytes gauge
tfo_node_filesystem_size_bytes{device="/dev/sda1",mountpoint="/",fstype="ext4"} 512110190592

# HELP tfo_node_network_receive_bytes_total Total bytes received
# TYPE tfo_node_network_receive_bytes_total counter
tfo_node_network_receive_bytes_total{device="eth0"} 12345678
```
## Testing

```bash
# Run Node Exporter collector tests
make test-nodeexporter

# Run all unit tests
make test-unit
```
## Metric Count

With all sub-collectors enabled, expect:

| Platform | Approximate Metric Series |
|----------|---------------------------|
| Linux (8 CPU, 2 disks, 3 NICs, 5 mounts) | 120-150+ |
| macOS (8 CPU, 1 disk, 2 NICs, 2 mounts) | 90-110+ |

The actual count depends on the number of CPUs, disks, network interfaces, and mountpoints on the host.
Copyright (c) 2024-2026 DevOpsCorner Indonesia. All rights reserved.