`FlinkAutoscalerEvaluator` is a pluggable component that allows users to provide custom scaling-metric evaluation logic on top of the metrics evaluated internally by the autoscaler. Custom evaluators are discovered through the [Plugins](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/filesystems/plugins) mechanism when running inside the Kubernetes operator, and through the standard Java `ServiceLoader` mechanism when running with `flink-autoscaler-standalone`. In both cases the implementation class must be registered in `META-INF/services`.
For each evaluation cycle, the autoscaler invokes the custom evaluator selected via the `job.autoscaler.metrics.custom-evaluator.name` configuration option once per job vertex. The metrics returned by the custom evaluator are merged on top of the internally evaluated metrics, allowing users to override or augment specific `ScalingMetric` values (e.g. `TARGET_DATA_RATE`, `TRUE_PROCESSING_RATE`, `CATCH_UP_DATA_RATE`).
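The merge semantics can be pictured with a small sketch. This is not the autoscaler's actual code: plain `String` keys stand in for the `ScalingMetric`/`EvaluatedScalingMetric` types, but the behavior is the one described above — an override returned by the custom evaluator replaces the internally evaluated value for that metric, while all other metrics pass through unchanged.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of how custom-evaluator results are merged on top of
// the internally evaluated metrics (simplified types, not autoscaler code).
public class MergeDemo {

    static Map<String, Double> merge(
            Map<String, Double> internallyEvaluated, Map<String, Double> customOverrides) {
        Map<String, Double> result = new HashMap<>(internallyEvaluated);
        result.putAll(customOverrides); // custom values win on key collisions
        return result;
    }

    public static void main(String[] args) {
        Map<String, Double> internal = Map.of(
                "TARGET_DATA_RATE", 100.0,
                "TRUE_PROCESSING_RATE", 500.0);
        // Evaluator overrides only TARGET_DATA_RATE.
        Map<String, Double> custom = Map.of("TARGET_DATA_RATE", 250.0);

        Map<String, Double> merged = merge(internal, custom);
        System.out.println(merged.get("TARGET_DATA_RATE"));     // 250.0 (overridden)
        System.out.println(merged.get("TRUE_PROCESSING_RATE")); // 500.0 (passed through)
    }
}
```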
The following steps demonstrate how to develop and use a custom evaluator.
1. Implement the `FlinkAutoscalerEvaluator` interface:
```java
public class CustomEvaluator implements FlinkAutoscalerEvaluator {

    @Override
    public String getName() {
        return "custom-evaluator";
    }

    // Note: the evaluate signature below is illustrative; consult the
    // FlinkAutoscalerEvaluator interface for the exact contract.
    @Override
    public Map<ScalingMetric, EvaluatedScalingMetric> evaluate(
            JobVertexID vertex, Context context) {
        Map<ScalingMetric, EvaluatedScalingMetric> overrides = new HashMap<>();
        double target = computeTargetRate(context); // hypothetical helper, computation elided
        if (target > 0 && context.getTopology().isSource(vertex)) {
            overrides.put(
                    ScalingMetric.TARGET_DATA_RATE,
                    EvaluatedScalingMetric.avg(target));
        }
        return overrides;
    }
}
```
The `Context` object exposes an unmodifiable view of the job configuration, the metrics history, the previously evaluated vertex metrics (evaluation happens in topological order), the job topology, the backlog status, the max restart time, and the evaluator-specific configuration.
2. Create the service definition file `org.apache.flink.autoscaler.metrics.FlinkAutoscalerEvaluator` in `META-INF/services` with the fully-qualified class name of your implementation:
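For example, if the implementation class is `org.apache.flink.autoscaler.custom.CustomEvaluator`, the service definition file contains the single line:

```text
org.apache.flink.autoscaler.custom.CustomEvaluator
```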
3. Package the project with Maven to generate the custom evaluator JAR.
4. Select the custom evaluator via configuration. The evaluator whose `getName()` matches the configured name will be invoked; any `job.autoscaler.metrics.custom-evaluator.<name>.*` entries are surfaced to the evaluator via `Context#getCustomEvaluatorConf()` (with the `job.autoscaler.metrics.custom-evaluator.<name>.` prefix stripped):
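For example, assuming the evaluator's `getName()` returns `custom-evaluator` and it reads a hypothetical `some-option` key:

```yaml
job.autoscaler.metrics.custom-evaluator.name: custom-evaluator
# Hypothetical evaluator-specific entry; the evaluator sees it as "some-option"
# via Context#getCustomEvaluatorConf() (prefix stripped).
job.autoscaler.metrics.custom-evaluator.custom-evaluator.some-option: some-value
```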
{{< hint info >}}
**Only one custom evaluator per pipeline is supported.** `job.autoscaler.metrics.custom-evaluator.name` is a single-valued option, and the autoscaler resolves and invokes exactly one evaluator per evaluation cycle. Registering multiple implementations via `META-INF/services` is fine, as they form a registry that different jobs can select from by name, but a single job cannot chain or compose more than one evaluator.
{{< /hint >}}
5. Deploy the evaluator.
- **Operator deployment** – create a Dockerfile to build a custom image from the `apache/flink-kubernetes-operator` official image and copy the generated JAR to a custom evaluator plugin directory under `/opt/flink/plugins` (the value of the `FLINK_PLUGINS_DIR` environment variable in the flink-kubernetes-operator helm chart). The structure of the custom evaluator directory under `/opt/flink/plugins` is as follows:
```text
/opt/flink/plugins
├── custom-evaluator
│ ├── custom-evaluator.jar
└── ...
```
Given this directory layout, the Dockerfile is defined as follows:
```dockerfile
FROM apache/flink-kubernetes-operator
ENV FLINK_PLUGINS_DIR=/opt/flink/plugins
ENV CUSTOM_EVALUATOR_DIR=custom-evaluator
RUN mkdir $FLINK_PLUGINS_DIR/$CUSTOM_EVALUATOR_DIR
# Copy the JAR built in step 3; adjust the source path to your build output.
COPY custom-evaluator.jar $FLINK_PLUGINS_DIR/$CUSTOM_EVALUATOR_DIR/
```
Install the flink-kubernetes-operator helm chart with the custom image and verify that the `deploy/flink-kubernetes-operator` log contains:
```text
o.a.f.k.o.a.AutoscalerUtils [INFO ] Discovered custom evaluator from plugin directory[/opt/flink/plugins]: org.apache.flink.autoscaler.custom.CustomEvaluator.
```
- **Standalone autoscaler** – place the custom evaluator JAR on the classpath of the `flink-autoscaler-standalone` process. It is picked up automatically via Java's `ServiceLoader`, and discovery is logged:
```text
o.a.f.a.s.AutoscalerUtils [INFO ] Discovered custom evaluator via ServiceLoader:org.apache.flink.autoscaler.custom.CustomEvaluator (name=custom-evaluator).
```