
Commit 0dcad00: add dead-queue docs
Parent: 08cb7f1

21 files changed: 215 additions & 33 deletions


_sidebar.idoc.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -16,7 +16,7 @@
   @global-contents-table-plugin-input|links-list
 - Action
   @global-contents-table-plugin-action|links-list
-- Output
+- [Output](/plugin/output/README.md)
   @global-contents-table-plugin-output|links-list

 - **Pipeline**
````

_sidebar.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -50,7 +50,7 @@
   - [split](plugin/action/split/README.md)
   - [throttle](plugin/action/throttle/README.md)

-- Output
+- [Output](/plugin/output/README.md)
   - [clickhouse](plugin/output/clickhouse/README.md)
   - [devnull](plugin/output/devnull/README.md)
   - [elasticsearch](plugin/output/elasticsearch/README.md)
````

plugin/README.md

Lines changed: 14 additions & 0 deletions

````diff
@@ -764,6 +764,8 @@ It sends the event batches to Clickhouse database using

 File.d uses low level Go client - [ch-go](https://github.com/ClickHouse/ch-go) to provide these features.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 [More details...](plugin/output/clickhouse/README.md)
 ## devnull
 It provides an API to test pipelines and other plugins.
@@ -773,6 +775,8 @@ It provides an API to test pipelines and other plugins.
 It sends events into Elasticsearch. It uses `_bulk` API to send events in batches.
 If a network error occurs, the batch will infinitely try to be delivered to the random endpoint.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 [More details...](plugin/output/elasticsearch/README.md)
 ## file
 It sends event batches into files.
@@ -796,18 +800,26 @@ GELF messages are separated by null byte. Each message is a JSON with the follow
 Every field with an underscore prefix `_` will be treated as an extra field.
 Allowed characters in field names are letters, numbers, underscores, dashes, and dots.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 [More details...](plugin/output/gelf/README.md)
 ## kafka
 It sends the event batches to kafka brokers using `franz-go` lib.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 [More details...](plugin/output/kafka/README.md)
 ## loki
 It sends the logs batches to Loki using HTTP API.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 [More details...](plugin/output/loki/README.md)
 ## postgres
 It sends the event batches to postgres db using pgx.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 [More details...](plugin/output/postgres/README.md)
 ## s3
 Sends events to s3 output of one or multiple buckets.
@@ -936,6 +948,8 @@ Out:
 }
 ```

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 [More details...](plugin/output/splunk/README.md)
 ## stdout
 It writes events to stdout(also known as console).
````

plugin/output/README.idoc.md

Lines changed: 72 additions & 1 deletion

````diff
@@ -1,3 +1,74 @@
 # Output plugins

-@global-contents-table-plugin-output|contents-table
+@global-contents-table-plugin-output|contents-table
+
+## dead queue
+
+Failed events from the main pipeline are redirected to a dead-letter queue (DLQ) to prevent data loss and enable recovery.
+
+### Examples
+
+#### Dead queue to the reserve elasticsearch
+
+Consumes logs from a Kafka topic. Sends them to Elasticsearch (primary cluster). Fails over to a reserve ("dead-letter") Elasticsearch if the primary is unavailable.
+
+```yaml
+main_pipeline:
+  input:
+    type: kafka
+    brokers:
+      - kafka:9092
+    topics:
+      - logs
+  output:
+    type: elasticsearch
+    workers_count: 32
+    endpoints:
+      - http://elasticsearch-primary:9200
+    # route to reserve elasticsearch
+    deadqueue:
+      endpoints:
+        - http://elasticsearch-reserve:9200
+      type: elasticsearch
+```
+
+#### Dead queue with second kafka topic and low priority consumer
+
+Main Pipeline: Processes logs from Kafka → Elasticsearch. Failed events go to a dead-letter Kafka topic.
+
+Dead-Queue Pipeline: Re-processes failed events from the DLQ topic with lower priority.
+
+```yaml
+main_pipeline:
+  input:
+    type: kafka
+    brokers:
+      - kafka:9092
+    topics:
+      - logs
+  output:
+    type: elasticsearch
+    workers_count: 32
+    endpoints:
+      - http://elasticsearch:9200
+    # route to deadqueue pipeline
+    deadqueue:
+      brokers:
+        - kafka:9092
+      default_topic: logs-deadqueue
+      type: kafka
+
+deadqueue_pipeline:
+  input:
+    type: kafka
+    brokers:
+      - kafka:9092
+    topics:
+      - logs-deadqueue
+  output:
+    type: elasticsearch
+    workers_count: 1 # low priority
+    fatal_on_failed_insert: false
+    endpoints:
+      - http://elasticsearch:9200
+```
````
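The two examples above pair an elasticsearch output with either a reserve cluster or a second kafka topic. This commit also marks clickhouse as supporting the dead queue but adds no clickhouse example; the sketch below extrapolates the same pattern. It is an assumption, not a configuration from this commit: the `deadqueue` block is assumed to accept the same params as a regular output of its `type`, and `addresses` is taken from the clickhouse config section elsewhere in this commit.

```yaml
# Hypothetical sketch, not from this commit: clickhouse as the primary output
# with a kafka dead queue. Assumes `deadqueue` takes the params of its `type`.
main_pipeline:
  input:
    type: kafka
    brokers:
      - kafka:9092
    topics:
      - logs
  output:
    type: clickhouse
    addresses:
      - clickhouse:9000
    # assumed: failed batches are redirected here instead of being lost
    deadqueue:
      type: kafka
      brokers:
        - kafka:9092
      default_topic: logs-deadqueue
```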

plugin/output/README.md

Lines changed: 85 additions & 0 deletions

````diff
@@ -7,6 +7,8 @@ It sends the event batches to Clickhouse database using

 File.d uses low level Go client - [ch-go](https://github.com/ClickHouse/ch-go) to provide these features.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 [More details...](plugin/output/clickhouse/README.md)
 ## devnull
 It provides an API to test pipelines and other plugins.
@@ -16,6 +18,8 @@ It provides an API to test pipelines and other plugins.
 It sends events into Elasticsearch. It uses `_bulk` API to send events in batches.
 If a network error occurs, the batch will infinitely try to be delivered to the random endpoint.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 [More details...](plugin/output/elasticsearch/README.md)
 ## file
 It sends event batches into files.
@@ -39,18 +43,26 @@ GELF messages are separated by null byte. Each message is a JSON with the follow
 Every field with an underscore prefix `_` will be treated as an extra field.
 Allowed characters in field names are letters, numbers, underscores, dashes, and dots.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 [More details...](plugin/output/gelf/README.md)
 ## kafka
 It sends the event batches to kafka brokers using `franz-go` lib.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 [More details...](plugin/output/kafka/README.md)
 ## loki
 It sends the logs batches to Loki using HTTP API.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 [More details...](plugin/output/loki/README.md)
 ## postgres
 It sends the event batches to postgres db using pgx.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 [More details...](plugin/output/postgres/README.md)
 ## s3
 Sends events to s3 output of one or multiple buckets.
@@ -179,9 +191,82 @@ Out:
 }
 ```

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 [More details...](plugin/output/splunk/README.md)
 ## stdout
 It writes events to stdout(also known as console).

 [More details...](plugin/output/stdout/README.md)
+
+## dead queue
+
+Failed events from the main pipeline are redirected to a dead-letter queue (DLQ) to prevent data loss and enable recovery.
+
+### Examples
+
+#### Dead queue to the reserve elasticsearch
+
+Consumes logs from a Kafka topic. Sends them to Elasticsearch (primary cluster). Fails over to a reserve ("dead-letter") Elasticsearch if the primary is unavailable.
+
+```yaml
+main_pipeline:
+  input:
+    type: kafka
+    brokers:
+      - kafka:9092
+    topics:
+      - logs
+  output:
+    type: elasticsearch
+    workers_count: 32
+    endpoints:
+      - http://elasticsearch-primary:9200
+    # route to reserve elasticsearch
+    deadqueue:
+      endpoints:
+        - http://elasticsearch-reserve:9200
+      type: elasticsearch
+```
+
+#### Dead queue with second kafka topic and low priority consumer
+
+Main Pipeline: Processes logs from Kafka → Elasticsearch. Failed events go to a dead-letter Kafka topic.
+
+Dead-Queue Pipeline: Re-processes failed events from the DLQ topic with lower priority.
+
+```yaml
+main_pipeline:
+  input:
+    type: kafka
+    brokers:
+      - kafka:9092
+    topics:
+      - logs
+  output:
+    type: elasticsearch
+    workers_count: 32
+    endpoints:
+      - http://elasticsearch:9200
+    # route to deadqueue pipeline
+    deadqueue:
+      brokers:
+        - kafka:9092
+      default_topic: logs-deadqueue
+      type: kafka
+
+deadqueue_pipeline:
+  input:
+    type: kafka
+    brokers:
+      - kafka:9092
+    topics:
+      - logs-deadqueue
+  output:
+    type: elasticsearch
+    workers_count: 1 # low priority
+    fatal_on_failed_insert: false
+    endpoints:
+      - http://elasticsearch:9200
+```
 <br>*Generated using [__insane-doc__](https://github.com/vitkovskii/insane-doc)*
````

plugin/output/clickhouse/README.md

Lines changed: 3 additions & 2 deletions

````diff
@@ -5,6 +5,8 @@ It sends the event batches to Clickhouse database using

 File.d uses low level Go client - [ch-go](https://github.com/ClickHouse/ch-go) to provide these features.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 ### Config params
 **`addresses`** *`[]Address`* *`required`*

@@ -128,8 +130,7 @@ File.d will fall with non-zero exit code or skip message (see fatal_on_failed_in

 **`fatal_on_failed_insert`** *`bool`* *`default=false`*

-After an insert error, fall with a non-zero exit code or not
-**Experimental feature**
+After an insert error, fall with a non-zero exit code or not. A configured deadqueue disables fatal exits.

 <br>

````
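The updated `fatal_on_failed_insert` description ("A configured deadqueue disables fatal exits") implies three outcomes once retries are exhausted: skip, crash, or redirect. A hedged sketch of how the flags might combine; whether clickhouse accepts the same `deadqueue` block as the elasticsearch examples in the output README is an assumption, as is the `retry` param name (taken from the gelf section of this commit).

```yaml
# Assumed flag interaction (sketch, not from this commit):
output:
  type: clickhouse
  addresses:
    - clickhouse:9000
  retry: 3
  # fatal_on_failed_insert: false -> the failed batch is skipped
  # fatal_on_failed_insert: true  -> the process exits non-zero...
  fatal_on_failed_insert: true
  # ...unless a deadqueue is configured, which disables fatal exits and
  # redirects the failed batch instead:
  deadqueue:
    type: elasticsearch
    endpoints:
      - http://elasticsearch-reserve:9200
```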

plugin/output/clickhouse/clickhouse.go

Lines changed: 3 additions & 2 deletions

````diff
@@ -30,6 +30,8 @@ It sends the event batches to Clickhouse database using
 [Native protocol](https://clickhouse.com/docs/en/interfaces/tcp/).

 File.d uses low level Go client - [ch-go](https://github.com/ClickHouse/ch-go) to provide these features.
+
+Supports [dead queue](/plugin/output/README.md#dead-queue).
 }*/

 const (
@@ -247,8 +249,7 @@ type Config struct {

 	// > @3@4@5@6
 	// >
-	// > After an insert error, fall with a non-zero exit code or not
-	// > **Experimental feature**
+	// > After an insert error, fall with a non-zero exit code or not. A configured deadqueue disables fatal exits.
 	FatalOnFailedInsert bool `json:"fatal_on_failed_insert" default:"false"` // *

 	// > @3@4@5@6
````

plugin/output/elasticsearch/README.md

Lines changed: 3 additions & 2 deletions

````diff
@@ -2,6 +2,8 @@
 It sends events into Elasticsearch. It uses `_bulk` API to send events in batches.
 If a network error occurs, the batch will infinitely try to be delivered to the random endpoint.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 ### Config params
 **`endpoints`** *`[]string`* *`required`*

@@ -128,8 +130,7 @@ File.d will fall with non-zero exit code or skip message (see fatal_on_failed_in

 **`fatal_on_failed_insert`** *`bool`* *`default=false`*

-After an insert error, fall with a non-zero exit code or not
-**Experimental feature**
+After an insert error, fall with a non-zero exit code or not. A configured deadqueue disables fatal exits.

 <br>

````

plugin/output/elasticsearch/elasticsearch.go

Lines changed: 3 additions & 2 deletions

````diff
@@ -23,6 +23,8 @@ import (
 /*{ introduction
 It sends events into Elasticsearch. It uses `_bulk` API to send events in batches.
 If a network error occurs, the batch will infinitely try to be delivered to the random endpoint.
+
+Supports [dead queue](/plugin/output/README.md#dead-queue).
 }*/

 const (
@@ -169,8 +171,7 @@ type Config struct {

 	// > @3@4@5@6
 	// >
-	// > After an insert error, fall with a non-zero exit code or not
-	// > **Experimental feature**
+	// > After an insert error, fall with a non-zero exit code or not. A configured deadqueue disables fatal exits.
 	FatalOnFailedInsert bool `json:"fatal_on_failed_insert" default:"false"` // *

 	// > @3@4@5@6
````

plugin/output/gelf/README.md

Lines changed: 3 additions & 2 deletions

````diff
@@ -16,6 +16,8 @@ GELF messages are separated by null byte. Each message is a JSON with the follow
 Every field with an underscore prefix `_` will be treated as an extra field.
 Allowed characters in field names are letters, numbers, underscores, dashes, and dots.

+Supports [dead queue](/plugin/output/README.md#dead-queue).
+
 ### Config params
 **`endpoint`** *`string`* *`required`*

@@ -121,8 +123,7 @@ After this timeout the batch will be sent even if batch isn't completed.

 **`retry`** *`int`* *`default=0`*

-Retries of insertion. If File.d cannot insert for this number of attempts,
-File.d will fall with non-zero exit code or skip message (see fatal_on_failed_insert).
+After an insert error, fall with a non-zero exit code or not. A configured deadqueue disables fatal exits.

 <br>

````
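gelf is also marked in this commit as supporting the dead queue, though no gelf example is added. A minimal sketch by analogy with the elasticsearch examples in the output README; the `endpoint` value is hypothetical, and the placement of `deadqueue` under the output block is an assumption.

```yaml
# Sketch by analogy, not from this commit.
gelf_pipeline:
  input:
    type: kafka
    brokers:
      - kafka:9092
    topics:
      - logs
  output:
    type: gelf
    endpoint: graylog:12201  # hypothetical GELF endpoint
    # assumed: failed batches go to a kafka dead-letter topic
    deadqueue:
      type: kafka
      brokers:
        - kafka:9092
      default_topic: logs-deadqueue
```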
