has_children: true
permalink: configuration
---

# Configuration

{: .no_toc }

Rqueue offers many configuration settings that can be adjusted either through the application
configuration or directly in the code.

{: .fs-6 .fw-300 }

## Table of contents

{: .no_toc .text-delta }

1. TOC
{:toc}

---
Apart from the basic configuration, Rqueue can be heavily customized, such as adjusting the number
of tasks executed concurrently. Additional configuration can be provided using the
`SimpleRqueueListenerContainerFactory` class; see the
[SimpleRqueueListenerContainerFactory Javadoc](https://javadoc.io/doc/com.github.sonus21/rqueue-core/latest/com/github/sonus21/rqueue/config/SimpleRqueueListenerContainerFactory.html)
for the full list of options.

```java
@Configuration
public class RqueueConfiguration {
  @Bean
  // ...
}
```

## Task or Queue Concurrency

By default, the number of task executors is twice the number of queues. You can configure a custom
or shared task executor using the factory's `setTaskExecutor` method. Queue concurrency can also be
set via the `RqueueListener` annotation's `concurrency` field, which accepts a positive number like
10 or a range like 5-10. If queue concurrency is specified, each queue uses its own task executor
to process consumed messages; otherwise, a shared task executor is used.

A global number of workers can be configured using the `setMaxNumWorkers` method. The
`RqueueListener` annotation also has a `batchSize` field: by default, a listener with concurrency
set fetches 10 messages per poll, while others fetch 1.

{: .note}
Increasing the batch size has consequences: if your thread pool is too small, the executor will
reject tasks and you may see many messages stuck in processing, unless you have configured a large
`queueCapacity`.
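
As an illustration, the `concurrency` value accepts the two formats mentioned above. The sketch
below only mirrors the documented formats; it is not Rqueue's actual parser:

```java
public class ConcurrencyFormatDemo {

  // Interprets a concurrency value as described above: either a single positive
  // number ("10") or a "min-max" range ("5-10"). Illustrative only.
  static int[] parse(String value) {
    if (value.contains("-")) {
      String[] parts = value.split("-");
      return new int[] {Integer.parseInt(parts[0].trim()), Integer.parseInt(parts[1].trim())};
    }
    int n = Integer.parseInt(value.trim());
    return new int[] {n, n}; // fixed concurrency: min == max
  }

  public static void main(String[] args) {
    int[] range = parse("5-10");
    System.out.println(range[0] + ".." + range[1]); // 5..10
    int[] fixed = parse("10");
    System.out.println(fixed[0] + ".." + fixed[1]); // 10..10
  }
}
```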

```java
class RqueueConfiguration {
  // ...
}
```

When a custom executor is provided, you must set `maxNumWorkers` correctly; otherwise, the thread
pool may be under-utilized or overloaded. The pool cannot truly be over-utilized: once it is
saturated, the executor rejects new tasks, which delays message consumption. Under-utilization can
be avoided with a configuration like the following.

```java
ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
// ... set corePoolSize, maxPoolSize, and queueCapacity (values elided) ...
threadPoolTaskExecutor.afterPropertiesSet();
factory.setTaskExecutor(threadPoolTaskExecutor);
```

In this configuration, there are three key variables: `corePoolSize`, `maxPoolSize`, and
`queueCapacity`.

- `corePoolSize` is the lower limit on active threads.
- `maxPoolSize` is the upper limit on active threads.
- `queueCapacity` means that even with `maxPoolSize` threads running, up to `queueCapacity` tasks
  can wait in the executor's queue; they are dequeued and executed as the running threads finish.
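
The interplay between `maxPoolSize` and `queueCapacity` can be demonstrated with the JDK's
`ThreadPoolExecutor`, which Spring's `ThreadPoolTaskExecutor` wraps; the sizes below are arbitrary
demo values:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSemanticsDemo {

  // Submits more tasks than the pool can hold to show when the executor starts
  // rejecting work. All pool sizes are hypothetical, chosen only for the demo.
  static int[] submit(int corePoolSize, int maxPoolSize, int queueCapacity, int tasks) {
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        corePoolSize, maxPoolSize, 60, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(queueCapacity));
    Runnable slowTask = () -> {
      try { Thread.sleep(500); } catch (InterruptedException ignored) { }
    };
    int accepted = 0;
    int rejected = 0;
    for (int i = 0; i < tasks; i++) {
      try {
        pool.execute(slowTask);
        accepted++;
      } catch (RejectedExecutionException e) {
        rejected++; // saturated: maxPoolSize threads busy and the queue is full
      }
    }
    pool.shutdown();
    return new int[] {accepted, rejected};
  }

  public static void main(String[] args) {
    // corePoolSize=1, maxPoolSize=2, queueCapacity=2 -> at most 2 + 2 = 4 tasks accepted
    int[] result = submit(1, 2, 2, 6);
    System.out.println(result[0] + " accepted, " + result[1] + " rejected");
  }
}
```

Once `maxPoolSize` threads are busy and `queueCapacity` tasks are waiting, further submissions are
rejected, which is why the worker count has to respect both limits.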

If you have N queues, you can set the maximum number of workers
as `(maxPoolSize + queueCapacity - N)`.

{: .warning}
Here, N threads are reserved for polling the queues; this count is not correct when **priority**
is used.
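
For example, with no priorities in play, the formula above works out as follows (all numbers
hypothetical):

```java
public class WorkerSizingDemo {

  // maxNumWorkers = maxPoolSize + queueCapacity - N, where each of the N queues
  // gets a polling thread. Purely arithmetic; the inputs are made-up examples.
  static int maxNumWorkers(int maxPoolSize, int queueCapacity, int queues) {
    return maxPoolSize + queueCapacity - queues;
  }

  public static void main(String[] args) {
    int queues = 4;
    int maxPoolSize = 8;
    int queueCapacity = 8;
    System.out.println("maxNumWorkers=" + maxNumWorkers(maxPoolSize, queueCapacity, queues));
  }
}
```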

The number of message pollers is the sum of the following:

1. Number of unique priority groups.
2. Number of queues with explicit priorities (e.g., `"critical=5,high=2"`).
3. Number of queues without a priority.

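For a hypothetical deployment, the sum works out as:

```java
public class PollerCountDemo {

  // Sums the three contributions listed above. Purely arithmetic.
  static int pollerCount(int priorityGroups, int queuesWithExplicitPriority,
      int queuesWithoutPriority) {
    return priorityGroups + queuesWithExplicitPriority + queuesWithoutPriority;
  }

  public static void main(String[] args) {
    // Hypothetical: one priority group, two queues declared like
    // "critical=5,high=2", and three queues without any priority.
    System.out.println(pollerCount(1, 2, 3) + " pollers"); // 6 pollers
  }
}
```
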
If you prefer not to do the math, you can set:

- `queueCapacity >= 2 * number of queues`
- `maxPoolSize >= 2 * number of queues`
- `corePoolSize >= number of queues`

{: .note}
Setting a non-zero `queueCapacity` can lead to duplicate message processing: polled messages wait
in the executor's queue, and if a message's `visibilityTimeout` expires before it runs, another
listener will pull the same message. Configure `queueCapacity` and `visibilityTimeout` so that
queued messages are always executed well before the timeout expires.
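
A quick back-of-the-envelope check for this risk (hypothetical numbers, and a deliberately
pessimistic single-worker model):

```java
public class VisibilityTimeoutCheck {

  // Worst-case wait of the last polled message, assuming a single worker
  // processes the whole batch sequentially (a pessimistic simplification).
  static long worstCaseWaitMillis(int batchSize, long perMessageMillis) {
    return batchSize * perMessageMillis;
  }

  public static void main(String[] args) {
    long worstCase = worstCaseWaitMillis(10, 2_000); // 10 messages, ~2 s each
    long visibilityTimeoutMillis = 15_000;           // hypothetical timeout
    System.out.println(worstCase > visibilityTimeoutMillis
        ? "risk of duplicate delivery"
        : "safe");
  }
}
```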

## Manual start of the container

The container starts automatically when the application starts or the context is refreshed, and it
shuts down gracefully. Automatic startup can be controlled with the `autoStartup` flag: when
`autoStartup` is `false`, the application must call the container's `start` and `stop` methods
itself, and for a graceful shutdown it should call the `destroy` method as well.

```java
class RqueueConfiguration {
  // ...
}
```

## Message converters configuration

The message converter can only be configured through the application configuration, using the
property `rqueue.message.converter.provider.class=com.example.MyMessageConverterProvider`. This
lets you customize message conversion with your own implementation of
`org.springframework.messaging.converter.MessageConverter`. The implementation must support both
methods of the `MessageConverter` interface: `toMessage` must return `Message<String>`, while
`fromMessage` may return any object.

{: .note}
The `MyMessageConverterProvider` class must implement the
`com.github.sonus21.rqueue.converter.MessageConverterProvider` interface.

```java
class MyMessageConverterProvider implements MessageConverterProvider {
  // ...
}
```
197214
198- The default implementation is ` DefaultMessageConverterProvider ` , ths converter
199- returns ` DefaultRqueueMessageConverter ` . DefaultRqueueMessageConverter can encode/decode most of the
200- messages, but it will have problem when message classes are not shared across application. If you do
201- not want to share class as jar files then you can
202- use ` com.github.sonus21.rqueue.converter.JsonMessageConverter `
203- or ` org.springframework.messaging.converter.MappingJackson2MessageConverter ` these converters
204- produce ` JSON ` data. Other implementation can be used as well MessagePack, ProtoBuf etc
215+ The default implementation, ` DefaultMessageConverterProvider ` ,
216+ returns ` DefaultRqueueMessageConverter ` . While ` DefaultRqueueMessageConverter ` can handle encoding
217+ and decoding for most messages, it may encounter issues when message classes are not shared across
218+ applications. To avoid sharing classes as JAR files, you can opt for converters such
219+ as ` com.github.sonus21.rqueue.converter.JsonMessageConverter `
220+ or ` org.springframework.messaging.converter.MappingJackson2MessageConverter ` . These converters
221+ serialize messages into JSON format, facilitating interoperability without shared class
222+ dependencies.
223+
224+ Additionally, alternatives like MessagePack or ProtoBuf can also be employed based on specific
225+ requirements for message serialization and deserialization. Each of these options provides
226+ flexibility in how messages are encoded and decoded across different systems and applications.
205227
206228## Additional Configuration
207229
- **rqueue.retry.per.poll**: The number of times a polled message is tried before it is declared
  dead or put back in the queue. The default value is `1`, meaning a message is executed once per
  poll and, on failure, retried on the next poll. Increasing this to `N` retries the polled message
  up to N consecutive times before it is made available to other listeners.
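
For example, to try each polled message up to three times before it is handed over (the value `3`
is purely illustrative):

```properties
# application.properties
rqueue.retry.per.poll=3
```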