HDDS-8511. Enforce strict S3-compliant name for object store buckets #9462
ivandika3 merged 9 commits into apache:master
Conversation
…me including non-S3-compliant characters when the OM server-side config 'ozone.om.namespace.s3.strict' is set to false
Thanks @Russole for working on this.
Thanks @Russole for picking this up.
There are tests: TestOMBucketCreateRequestWithFSO and TestOMBucketCreateRequest.
Could you please add a test for this case in the FSO file?
Also, linked buckets can be created through the `ozone sh bucket link` command (LinkBucketHandler code).
This also might need a check. You can choose to handle it in this PR, or create a separate jira.
adoroszlai left a comment
Thanks @Russole for working on this.
Please also update:
- test cases in TestOMBucketCreateRequest (search for isStrictS3)
- description of the config property in hadoop-hdds/common/src/main/resources/ozone-default.xml
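For reference, an `ozone-default.xml` entry for this property might look like the sketch below. The property name comes from this PR; the `<tag>` value and description wording are assumptions for illustration, not the exact text in the repository:

```xml
<property>
  <name>ozone.om.namespace.s3.strict</name>
  <value>true</value>
  <tag>OM, OZONE</tag>
  <description>
    When true, Ozone Manager enforces strict S3-compliant bucket naming.
    When false, FSO buckets may use non-S3-compliant names (for example,
    names containing underscores); OBJECT_STORE and LEGACY buckets still
    enforce S3 naming rules.
  </description>
</property>
```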
```java
// Convert BucketLayoutProto -> BucketLayout enum.
BucketLayout bucketLayout =
    BucketLayout.valueOf(bucketInfo.getBucketLayout().name());

// Determine if this bucket belongs to the S3 namespace.
String s3VolumeName = ozoneManager.getConfiguration().get(
    OzoneConfigKeys.OZONE_S3_VOLUME_NAME,
    OzoneConfigKeys.OZONE_S3_VOLUME_NAME_DEFAULT);
boolean isS3Bucket =
    s3VolumeName != null && s3VolumeName.equals(bucketInfo.getVolumeName());

// Compute effectiveStrict based on global strict flag, S3 namespace,
// and bucket layout.
boolean globalStrict = ozoneManager.isStrictS3();
boolean effectiveStrict = globalStrict;

// When strict=false, only S3 buckets with FSO layout should allow
// non-S3-compliant characters (e.g. underscore).
// All other S3 bucket layouts must still enforce strict naming rules.
if (!globalStrict && isS3Bucket) {
  if (bucketLayout == BucketLayout.FILE_SYSTEM_OPTIMIZED) {
    // S3 + FSO + strict=false -> bypass strict S3 validation
    // (allow '_' and other non-S3 characters)
    effectiveStrict = false;
  } else {
    // S3 + non-FSO + strict=false -> strict S3 validation still applies;
    // '_' and other non-S3 characters are not permitted
    effectiveStrict = true;
  }
}

// Verify resource name
OmUtils.validateBucketName(bucketInfo.getBucketName(),
    effectiveStrict);
```
- Considering bucket links, multitenancy, and that the S3 volume name can be updated in the config, I don't think this should be restricted to buckets in the S3 volume.
- Check eligibility using BucketLayout#isObjectStore to handle LEGACY, too.
- Use the existing conversion logic BucketLayout#fromProto.
Suggested change:

```java
BucketLayout bucketLayout = BucketLayout.fromProto(bucketInfo.getBucketLayout());
boolean strict = ozoneManager.isStrictS3()
    || bucketLayout.isObjectStore(ozoneManager.getConfig().isFileSystemPathEnabled());
OmUtils.validateBucketName(bucketInfo.getBucketName(), strict);
```
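The reviewer's suggested rule (`strict = isStrictS3 || bucketLayout.isObjectStore(fsPathsEnabled)`) can be illustrated with a self-contained sketch. The `BucketLayout` enum and its `LEGACY` semantics below are simplified stand-ins assumed for illustration, not the real Ozone classes:

```java
// Sketch of the effective-strict decision: with strict mode off, only FSO
// buckets escape strict S3 name validation; OBJECT_STORE always enforces it,
// and LEGACY is assumed to behave like OBJECT_STORE when file-system paths
// are disabled.
public class StrictDecisionSketch {

  enum BucketLayout {
    FILE_SYSTEM_OPTIMIZED, OBJECT_STORE, LEGACY;

    // Assumption: LEGACY counts as object store when fs paths are disabled.
    boolean isObjectStore(boolean fsPathsEnabled) {
      return this == OBJECT_STORE || (this == LEGACY && !fsPathsEnabled);
    }
  }

  static boolean effectiveStrict(boolean strictS3Config,
      BucketLayout layout, boolean fsPathsEnabled) {
    return strictS3Config || layout.isObjectStore(fsPathsEnabled);
  }

  public static void main(String[] args) {
    // With ozone.om.namespace.s3.strict=false and fs paths disabled,
    // only FSO buckets get relaxed validation.
    for (BucketLayout layout : BucketLayout.values()) {
      System.out.println(layout + " strict="
          + effectiveStrict(false, layout, false));
    }
  }
}
```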
Thanks @sreejasahithi, @adoroszlai, and @Tejaskriya for the detailed reviews.
adoroszlai left a comment
Thanks @Russole for updating the patch.
```java
OmConfig omConfig = ozoneManager.getConfig();
boolean fsPathEnabled = false;
if (omConfig != null) {
  fsPathEnabled = omConfig.isFileSystemPathEnabled();
}
```
Instead of adding this logic, please mock ozoneManager.getConfig() in TestOMClientRequestWithUserInfo, like:
```java
protected OMBucketCreateRequest doPreExecute(String volumeName,
                                             String bucketName,
                                             OzoneManagerProtocolProtos.BucketLayoutProto layout) throws Exception {
```
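The stubbing idea can be sketched without the Ozone test harness. The interfaces below are simplified stand-ins for the real `OzoneManager` and `OmConfig` types; a real test would use Mockito's `when(...).thenReturn(...)` rather than the hand-rolled lambdas:

```java
// Sketch: if the mocked OzoneManager always returns a config object, the
// production code can call getConfig().isFileSystemPathEnabled() directly
// and the null check above becomes unnecessary.
public class MockConfigSketch {

  interface OmConfig {
    boolean isFileSystemPathEnabled();
  }

  interface OzoneManager {
    OmConfig getConfig();
  }

  public static void main(String[] args) {
    // Hand-rolled stubs; in the real test this would be
    // when(ozoneManager.getConfig()).thenReturn(config).
    OmConfig config = () -> true;
    OzoneManager om = () -> config;

    // No null check needed once the stub always returns a config.
    boolean fsPathEnabled = om.getConfig().isFileSystemPathEnabled();
    System.out.println("fsPathEnabled = " + fsPathEnabled);
  }
}
```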
nit: Please do not format method signature like this. Whenever visibility / return type / method name / other modifiers are changed, we would have to reindent all parameters.
```java
// Explicitly set bucket layout to OBJECT_STORE so the test doesn't depend on
// defaults or mocked OM behavior.
OzoneManagerProtocolProtos.BucketInfo.Builder bucketInfo =
    newBucketInfoBuilder(bucketName, volumeName)
        .setBucketLayout(
            OzoneManagerProtocolProtos.BucketLayoutProto.OBJECT_STORE);
```
Then we don't need this new test class, test case can be added in TestOMBucketCreateRequest.
I re-ran the job on my fork and it passed.
ivandika3 left a comment
Thanks for the patch. Left some comments.
Thanks @ivandika3 and @ashishkumar50 for the review. I've addressed the comments and updated the patch accordingly.
Thanks @Russole for the patch and @adoroszlai @ashishkumar50 for the reviews.
Thanks @ivandika3, @sreejasahithi, @adoroszlai, @Tejaskriya, and @ashishkumar50 for the reviews. Thanks a lot!
What changes were proposed in this pull request?
This patch limits the non-strict naming behavior to FSO buckets only, since they do not rely on S3 naming semantics.
Legacy/ObjectStore buckets, which represent the S3 bucket layout in Ozone, continue to enforce strict, S3-compliant naming rules even when strict mode is disabled.
In summary, this PR ensures that only FSO buckets may use non-strict bucket names when
ozone.om.namespace.s3.strict=false; S3 buckets (Legacy/ObjectStore) must remain strict.
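To make "strict" concrete, the check below sketches which names pass S3-style validation. The pattern is a simplified approximation of AWS S3 bucket-name restrictions, assumed for illustration; it is not Ozone's exact validation logic:

```java
import java.util.regex.Pattern;

// Illustrative S3-style name check: 3-63 characters of lowercase letters,
// digits, dots, and hyphens, starting and ending with a letter or digit.
public class S3NameRules {

  private static final Pattern S3_NAME =
      Pattern.compile("^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$");

  static boolean isS3Compliant(String name) {
    return S3_NAME.matcher(name).matches();
  }

  public static void main(String[] args) {
    System.out.println(isS3Compliant("my-bucket")); // hyphen allowed -> true
    System.out.println(isS3Compliant("my_bucket")); // underscore -> false
    System.out.println(isS3Compliant("MyBucket"));  // uppercase -> false
  }
}
```

Under this PR, a name like `my_bucket` would be accepted only for an FSO bucket with `ozone.om.namespace.s3.strict=false`.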
Approach
What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-8511
How was this patch tested?
I relied on existing unit and integration tests, and verified that the full CI passed on my fork.
Please let me know if additional tests are needed.