A few important points to consider that will help troubleshoot or prevent potential issues and errors while running the Flink application:

- When packaging the Flink application and its dependencies into a JAR file that can be deployed to the Flink environment, make sure that the following dependency is added to the `pom.xml` of the `KafkaCustomKeystoreWithConfigProvidersJava` project. Without this dependency, a `ClassNotFoundException` will be thrown for any class referenced from the package `com.amazonaws.kafka.config.providers` (e.g. `SecretsManagerConfigProvider`):

  ```xml
  <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>msk-config-providers</artifactId>
      <version>0.0.1-SNAPSHOT-uber</version>
  </dependency>
  ```

- While configuring the Managed Apache Flink application, ensure that the VPC connectivity, subnets, and security groups (under the 'Networking' section) are correctly selected and allow access to the required resources, e.g. the Kafka cluster and its brokers. Depending on the setup, e.g. for mTLS, you may need to add a self-referencing inbound rule to the security group for port 9094. Also check whether any Kafka ACLs are set on the respective topic(s) for authorization and whether the required operation(s) have the `ALLOW` permission type.

- While configuring the runtime properties for the Apache Flink application, ensure that the value for `keystore.bucket` does not contain the `s3://` prefix. This differs from the 'Application code location' section, where the specified Amazon S3 bucket must have the format `s3://bucket`. Also, the path to S3 object(s), e.g. `keystore.path`, does not need a trailing slash.

- When running the Apache Flink application, if you get a `SecretsManagerException` with status code 400 (e.g. `not authorized to perform: secretsmanager:GetSecretValue on resource: SSL_KEYSTORE_PASS because no identity-based policy allows the secretsmanager:GetSecretValue action`), make sure that the IAM role for the application has the necessary permission policy for Secrets Manager.
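To illustrate the first point, the sketch below shows how the `SecretsManagerConfigProvider` class from the `msk-config-providers` dependency is typically wired into the Kafka client properties of the Flink application. The property keys follow Kafka's standard `ConfigProvider` mechanism; the secret name `SSL_KEYSTORE_PASS` and the key `password` inside it are illustrative assumptions, not values prescribed by this project:

```java
import java.util.Properties;

public class SecretsProviderConfigExample {
    public static Properties kafkaProperties() {
        Properties props = new Properties();
        // Register the Secrets Manager provider shipped in the
        // msk-config-providers artifact; without that JAR on the classpath,
        // resolving this class fails with a ClassNotFoundException.
        props.setProperty("config.providers", "secretsmanager");
        props.setProperty("config.providers.secretsmanager.class",
                "com.amazonaws.kafka.config.providers.SecretsManagerConfigProvider");
        // Placeholder resolved at runtime by the provider. The secret name
        // "SSL_KEYSTORE_PASS" and field "password" are hypothetical examples.
        props.setProperty("ssl.keystore.password",
                "${secretsmanager:SSL_KEYSTORE_PASS:password}");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(
                kafkaProperties().getProperty("config.providers.secretsmanager.class"));
    }
}
```

The placeholder is left unresolved in the properties object itself; the Kafka client substitutes the secret value only when the configuration is actually loaded.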
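For the runtime-properties point, a minimal example of well-formed values is shown below; the bucket name and object path are placeholders, not values from this project:

```properties
# No s3:// prefix here (unlike the 'Application code location' setting):
keystore.bucket = my-keystore-bucket
# No trailing slash on the object path:
keystore.path = certs/kafka
```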
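For the last point, an IAM permission policy attached to the application's role would need a statement along these lines; the region, account ID, and secret name in the ARN are placeholders you must replace with your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:111122223333:secret:SSL_KEYSTORE_PASS-*"
    }
  ]
}
```

Scoping `Resource` to the specific secret (rather than `*`) keeps the policy least-privilege while still resolving the 400 `SecretsManagerException`.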