This file tracks Prometheus metric additions, changes, and removals in java-tron. For the full set of metrics emitted today, see the references at the bottom.
4.8.2
- `tron:block_transaction_count` (Histogram, label `miner`) — per-block transaction count, sampled at the entry of `Manager#pushBlock` before any early return, so duplicate, stale, and fork-switched pushes are observed alongside applied blocks. Primary use cases: empty-block detection per super representative, and per-SR TPS / throughput percentile interpolation. The default buckets `[0, 20, 50, 80, 100, 120, 140, 160, 180, 200, 230, 260, 300, 500, 2000, 5000, 10000]` are densified around 0–300 for percentile interpolation in the typical TPS range; 5000 and 10000 are retained as safety-net buckets to preserve resolution for outlier events such as stress tests or repush storms. (#6624)

  Operational note: the effective upper bound is 10000; blocks exceeding it land in `+Inf`. Monitor the overflow ratio — e.g. `(rate(tron_block_transaction_count_bucket{le="+Inf"}[5m]) - rate(tron_block_transaction_count_bucket{le="10000"}[5m])) / rate(tron_block_transaction_count_count[5m]) > 0.01` — as a signal to re-tune the upper bound.
- `tron:sr_set_change` (Counter, labels `action`, `witness`) — incremented once per witness whenever the active SR set rotates at a maintenance boundary. `action` is one of `add` / `remove`. Cardinality grows with the number of distinct witnesses that have ever entered or left the active set, not with the active set size at any given moment. (#6624)
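The overflow-ratio check in the operational note above can be sketched offline. The following is a minimal Python illustration (not java-tron code) of the arithmetic behind that PromQL expression, using hypothetical cumulative bucket counts — Prometheus histogram buckets are cumulative, and `le="+Inf"` equals the total observation count:

```python
# Sketch of the overflow-ratio computation for tron:block_transaction_count.
# bucket_counts maps the `le` bound to its cumulative count; values are hypothetical.

def overflow_ratio(bucket_counts: dict[str, float], upper: str) -> float:
    """Fraction of observations that exceeded the `upper` finite bucket bound."""
    total = bucket_counts["+Inf"]   # le="+Inf" is the total observation count
    within = bucket_counts[upper]   # observations <= the finite upper bound
    return (total - within) / total if total else 0.0

# Hypothetical cumulative counts over a 5m window:
counts = {"300": 940.0, "10000": 998.0, "+Inf": 1000.0}
ratio = overflow_ratio(counts, "10000")
assert ratio == 0.002               # 2 of 1000 blocks exceeded 10000 transactions
print(f"overflow ratio: {ratio:.3f}")
```

A sustained ratio above the 0.01 threshold from the note would suggest re-tuning the top bucket bound.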
Pre-4.8.2 Baseline
Snapshot of metrics emitted prior to this changelog. Per-version provenance is not tracked here; consult the git log of `common/src/main/java/org/tron/common/prometheus/` for the exact origin of each metric.
- `tron:header_height` (Gauge) — latest block height on this node.
- `tron:header_time` (Gauge) — latest block timestamp on this node.
- `tron:block_push_latency_seconds` (Histogram) — `Manager#pushBlock` latency.
- `tron:block_process_latency_seconds` (Histogram, label `sync`) — `TronNetDelegate#processBlock` latency.
- `tron:block_generate_latency_seconds` (Histogram, label `address`) — block generation latency per producer.
- `tron:block_fetch_latency_seconds` (Histogram) — block fetch latency.
- `tron:block_receive_delay_seconds` (Histogram) — `receiveTime - blockTime`.
- `tron:block_fork` (Counter, label `type`) — fork events by type.
- `tron:lock_acquire_latency_seconds` (Histogram, label `type`) — DB / chain lock acquisition latency.
- `tron:miner` (Counter, labels `miner`, `type`) — blocks produced by an SR.
- `tron:miner_latency_seconds` (Histogram, label `miner`) — block mining latency per producer.
- `tron:miner_delay_seconds` (Histogram, label `miner`) — `actualTime - planTime` for block production.
- `tron:txs` (Counter, labels `type`, `detail`) — transaction counts.
- `tron:process_transaction_latency_seconds` (Histogram, labels `type`, `contract`) — transaction processing latency.
- `tron:verify_sign_latency_seconds` (Histogram, label `type`) — signature verification latency for transactions and blocks.
- `tron:tx_cache` (Gauge, label `type`) — transaction cache stats.
- `tron:manager_queue_size` (Gauge, label `type`) — `Manager` queue sizes (pending / popped / queued / repush).
- `tron:peers` (Gauge, label `type`) — peer counts.
- `tron:p2p_error` (Counter, label `type`) — P2P error events.
- `tron:p2p_disconnect` (Counter, label `type`) — P2P disconnect events.
- `tron:ping_pong_latency_seconds` (Histogram) — peer ping-pong RTT.
- `tron:message_process_latency_seconds` (Histogram, label `type`) — peer message processing latency.
- `tron:tcp_bytes` (Histogram, label `type`) — TCP traffic.
- `tron:udp_bytes` (Histogram, label `type`) — UDP traffic.
- `tron:http_service_latency_seconds` (Histogram, label `url`) — HTTP endpoint latency.
- `tron:http_bytes` (Histogram, labels `url`, `status`) — HTTP traffic.
- `tron:grpc_service_latency_seconds` (Histogram, label `endpoint`) — gRPC endpoint latency.
- `tron:jsonrpc_service_latency_seconds` (Histogram, label `method`) — JSON-RPC method latency.
- `tron:internal_service_latency_seconds` (Histogram, labels `class`, `method`) — internal service-call latency.
- `tron:internal_service_fail` (Counter, labels `class`, `method`) — internal service-call failure count.
- `tron:db_size_bytes` (Gauge, labels `type`, `db`, `level`) — storage size in bytes per engine, database, and level; `type` is the storage engine (`LEVELDB` or `ROCKSDB`) depending on node configuration.
- `tron:db_sst_level` (Gauge, labels `type`, `db`, `level`) — SST files per compaction level per engine and database; `type` is the storage engine (`LEVELDB` or `ROCKSDB`) depending on node configuration.
- `tron:guava_cache_hit_rate` (Gauge, label `type`) — hit rate of a Guava cache; `type` is the cache name.
- `tron:guava_cache_request` (Gauge, label `type`) — total request count of a Guava cache; `type` is the cache name.
- `tron:guava_cache_eviction_count` (Gauge, label `type`) — eviction count of a Guava cache; `type` is the cache name. (Registered via `GuavaCacheExports` for caches that opt in to `CacheManager`.)
- `tron:error_info` (Counter, labels `topic`, `type`) — incremented on every ERROR-level log line by `InstrumentedAppender`.
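The latency histograms above are typically consumed via PromQL's `histogram_quantile()`. As a reference for reasoning about bucket layouts, here is a minimal Python sketch (not java-tron code, with hypothetical bounds and counts) of the linear interpolation that function performs within the bucket containing the target rank:

```python
# Sketch: estimate a quantile from cumulative histogram buckets, the way
# histogram_quantile() does. buckets is a sorted list of
# (upper_bound, cumulative_count); the last bound is +Inf.

def quantile_from_buckets(q: float, buckets: list[tuple[float, float]]) -> float:
    total = buckets[-1][1]          # +Inf count is the total observation count
    rank = q * total                # observation rank the quantile falls on
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                return prev_bound   # rank in +Inf: clamp to highest finite bound
            # Linear interpolation inside the containing bucket.
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return prev_bound

# 90th percentile over hypothetical counts for bounds [100, 200, +Inf]:
b = [(100.0, 50.0), (200.0, 90.0), (float("inf"), 100.0)]
assert quantile_from_buckets(0.9, b) == 200.0
```

This is why dense buckets in the range of interest matter: the estimate is exact only at bucket bounds and linearly interpolated in between, so wide buckets produce coarse quantiles.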
Emitted by `OperatingSystemExports` (no labels): `system_available_cpus`, `process_cpu_load`, `system_cpu_load`, `system_load_average`, `system_total_physical_memory_bytes`, `system_free_physical_memory_bytes`, `system_total_swap_spaces_bytes`, `system_free_swap_spaces_bytes`.
Auto-emitted by the Prometheus client library via `DefaultExports.initialize()` (`simpleclient_hotspot`). The full list is owned by the upstream library and is not enumerated here; see the client_java docs. Common ones: `jvm_memory_bytes_*`, `jvm_gc_collection_seconds_*`, `jvm_threads_*`, `process_cpu_seconds_total`, `process_open_fds`, `process_resident_memory_bytes`.
References
- Official metrics documentation — descriptions, configuration, and example queries.
- tron-docker `metric_monitor/README.md` — operator-oriented overview with deployment guidance.
- java-tron-server Grafana dashboard — maintained reference dashboard JSON.