// modules/manage/pages/cluster-maintenance/manage-throughput.adoc
= Manage Throughput
:description: Manage the throughput of Kafka traffic with configurable properties.
:page-categories: Management, Networking
// tag::single-source[]
Redpanda supports throughput throttling on both ingress and egress independently, and allows configuration at the broker and client levels. This helps prevent clients from causing unbounded network and disk usage on brokers.

* Broker-wide limits apply to all clients connected to the broker and restrict total traffic on the broker.
* Client limits apply to a set of clients defined by their `client_id` and help prevent a set of clients from starving other clients using the same broker. You can manage client quotas with xref:reference:rpk/rpk-cluster/rpk-cluster-quotas.adoc[`rpk cluster quotas`] or with the Kafka API. When no quotas apply, the client has unlimited throughput.

ifdef::env-cloud[]
NOTE: Throughput throttling is supported for BYOC and Dedicated clusters.
endif::[]
== Throughput throttling enforcement
NOTE: As of v24.2, Redpanda enforces all throughput limits per broker, including client throughput.
Throughput limits are enforced by applying backpressure to clients. When a connection is in breach of the throughput limit, the throttler advises the client about the delay (throttle time) that would bring the rate back to the allowed level. Redpanda starts by adding a `throttle_time_ms` field to responses. If that isn't honored, delays are inserted on the connection's next read operation.

ifndef::env-cloud[]
The throttling delay may not exceed the limit set by xref:reference:tunable-properties.adoc#max_kafka_throttle_delay_ms[`max_kafka_throttle_delay_ms`].
endif::[]
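The enforcement described above can be modeled as a simple rate check: given the bytes observed on a connection over a recent window, compute how long the client should pause to fall back under the allowed rate, capped by a maximum delay. This is an illustrative sketch, not Redpanda's actual implementation; the function and parameter names are hypothetical, though the cap mirrors the role of `max_kafka_throttle_delay_ms`.

```python
# Illustrative model of throttle-delay calculation (hypothetical names,
# not Redpanda source code).

def throttle_delay_ms(bytes_in_window: int, window_ms: int,
                      limit_bps: int, max_delay_ms: int) -> int:
    """Return the delay (ms) to advise the client, or 0 if under the limit."""
    if limit_bps <= 0:
        return 0  # limit disabled: no throttling
    # How long the observed bytes *should* have taken at the allowed rate.
    allowed_ms = bytes_in_window * 1000 / limit_bps
    delay = allowed_ms - window_ms
    if delay <= 0:
        return 0  # connection is at or under the allowed rate
    # Cap the delay, as max_kafka_throttle_delay_ms does.
    return int(min(delay, max_delay_ms))

# 2 MiB observed in a 1-second window against a 1 MiB/s limit:
# the client is advised to wait roughly one extra second.
print(throttle_delay_ms(2 * 1024 * 1024, 1000, 1024 * 1024, 60000))  # 1000
```

The advised delay is what Redpanda communicates via the `throttle_time_ms` response field; only if the client ignores it does the broker delay reads itself.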
== Broker-wide throughput limits
Broker-wide throughput limits account for all Kafka API traffic going into or out of the broker, as data is produced to or consumed from a topic. The limit values represent the allowed rate of data in bytes per second passing through in each direction. Redpanda also provides administrators the ability to exclude clients from throughput throttling and to fine-tune which Kafka request types are subject to throttling limits.
ifndef::env-cloud[]
=== Broker-wide throughput limit properties
The properties for broker-wide throughput quota balancing are configured at the cluster level, for all brokers in a cluster:
// ...
[NOTE]
====
By default, both `kafka_throughput_limit_node_in_bps` and `kafka_throughput_limit_node_out_bps` are disabled, and no throughput limits are applied. You must manually set them to enable throughput throttling.
====
endif::[]
== Client throughput limits
// ...
. Quota configured through the Kafka API for an exact match on `client_id`
. Quota configured through the Kafka API for a prefix match on `client_id`
ifndef::env-cloud[]
. Quota configured through cluster configuration properties (`kafka_client_group_byte_rate_quota` and `kafka_client_group_fetch_byte_rate_quota`, deprecated in v24.2) for a prefix match on `client_id`
endif::[]
. Default quota configured through the Kafka API on `client_id`
ifndef::env-cloud[]
. Default quota configured through cluster configuration properties (`target_quota_byte_rate`, `target_fetch_quota_byte_rate`, and `kafka_admin_topic_api_rate`, deprecated in v24.2) on `client_id`
endif::[]
Redpanda recommends <<migrate,migrating>> from cluster configuration-managed quotas to Kafka-compatible quotas. You can re-create the configuration-based quotas with `rpk`, and then remove the cluster configurations.
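The precedence order above can be sketched as a lookup that consults each quota source in turn and returns the first match. The function and store names below are hypothetical; only the ordering mirrors the documentation.

```python
# Illustrative sketch of quota precedence resolution (hypothetical names).
# Stores are dicts: exact/default from the Kafka API; "cluster_*" stand in
# for the deprecated cluster configuration properties.

def effective_quota(client_id, exact, prefix, cluster_prefix,
                    default, cluster_default):
    if client_id in exact:                       # 1. Kafka API, exact match
        return exact[client_id]
    for p, q in prefix.items():                  # 2. Kafka API, prefix match
        if client_id.startswith(p):
            return q
    for p, q in cluster_prefix.items():          # 3. cluster config, prefix (deprecated)
        if client_id.startswith(p):
            return q
    if default is not None:                      # 4. Kafka API default
        return default
    return cluster_default                       # 5. cluster config default (deprecated);
                                                 #    None means unlimited throughput

# An exact match wins over a prefix match and over the default:
print(effective_quota("consumer-1", {"consumer-1": 100},
                      {"consumer": 50}, {}, 10, None))  # 100
```

When none of the five sources match, the function returns `None`, corresponding to the documented behavior that a client with no applicable quota has unlimited throughput.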
// ...
----
client-id=consumer-1
producer_byte_rate=140000
----
To set a throughput quota for a single client, use the xref:reference:rpk/rpk-cluster/rpk-cluster-quotas-alter.adoc[`rpk cluster quotas alter`] command.

NOTE: A client group specified with `client-id-prefix` is not the equivalent of a Kafka consumer group. It is used only to match requests based on the `client_id` prefix. The `client_id` field is typically a configurable property when you create a client with Kafka libraries.
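To illustrate the note above: prefix matching operates on the `client_id` string a client presents when it connects, independent of any consumer group it may join. The configuration keys below follow librdkafka-style naming and are purely illustrative.

```python
# Illustrative sketch: a `client-id-prefix` group matches on the client_id
# string, not on consumer-group membership. Config keys are hypothetical
# (their exact names vary by Kafka client library).

producer_config = {
    "bootstrap.servers": "localhost:9092",
    "client.id": "analytics-producer-7",   # the value prefix matching sees
}

def in_client_group(client_id: str, prefix: str) -> bool:
    """Return True if this client falls into the quota group for `prefix`."""
    return client_id.startswith(prefix)

print(in_client_group(producer_config["client.id"], "analytics-"))  # True
```

Any client whose `client_id` begins with `analytics-` shares that group's quota, whether it is a producer, a consumer, or an admin client.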
=== Default client throughput limit
You can apply default throughput limits to clients. Redpanda applies the default limits if no quotas are configured for a specific `client_id` or prefix.
// ...

You can also use Redpanda Console to view enforced limits.
=== Monitor client throughput
The following metrics provide insights into client throughput quota usage:
* Client quota throughput per rule and quota type:
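As a sketch of consuming such metrics, the snippet below parses Prometheus-format exposition text of the kind served by a metrics endpoint. The metric and label names here are placeholders, not the exact names Redpanda exports; check the scrape output of your cluster for the real identifiers.

```python
# Illustrative sketch: parsing Prometheus-format metrics text into a dict.
# The metric name "example_client_quota_throughput" and its labels are
# placeholders, not actual Redpanda metric names.

sample = """\
example_client_quota_throughput{rule="exact_match",quota_type="produce"} 140000
example_client_quota_throughput{rule="default",quota_type="fetch"} 80000
"""

def parse_metrics(text: str) -> dict:
    """Map each 'name{labels}' series to its sampled value."""
    out = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comment lines
        series, value = line.rsplit(" ", 1)
        out[series] = float(value)
    return out

metrics = parse_metrics(sample)
print(metrics['example_client_quota_throughput{rule="exact_match",quota_type="produce"}'])  # 140000.0
```

A monitoring pipeline would normally scrape the endpoint with Prometheus itself; this parser only shows the shape of the data per rule and quota type.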