First of all, thanks for the great work on this solution — it has been very helpful for accelerating document processing workloads on Amazon Web Services.
We are currently using this solution in a multi-project environment and have identified a gap related to cost attribution and usage tracking for AI services.
🚨 Problem Statement
The current implementation appears to use default configurations when invoking:
• Amazon Bedrock
• Amazon Textract
Because of this:
• All usage is aggregated at the account/service level
• There is no clear way to attribute costs per project / tenant / workflow
• Cost analysis via AWS Cost Explorer becomes limited and indirect
🎯 Proposed Feature
We would like to request support for:
Bedrock Inference Profiles
• Allow configuration and use of inference profiles when invoking Bedrock models
Textract Adapters
• Add support for Textract adapters in document processing flows
Together, these would let us tag the inference profile and the adapter per project and attribute the resulting usage in AWS Cost Explorer.
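For illustration, here is a minimal sketch of how the two settings could be threaded into the service calls. The boto3 parameter names (`modelId`, `AdaptersConfig`) are real; the helper names, config values, and ARNs below are hypothetical, and the sketch only builds the request kwargs so it can be shown without AWS credentials:

```python
import json

def build_bedrock_kwargs(prompt: str, inference_profile_arn: str) -> dict:
    """Bedrock's InvokeModel accepts an inference profile ARN in place of a
    foundation-model ID, so usage is attributed to the profile."""
    return {
        "modelId": inference_profile_arn,  # profile ARN instead of model ID
        "body": json.dumps({"prompt": prompt}),
    }

def build_textract_kwargs(bucket: str, key: str,
                          adapter_id: str, adapter_version: str) -> dict:
    """Textract's AnalyzeDocument takes an AdaptersConfig block that
    references the adapter to apply."""
    return {
        "Document": {"S3Object": {"Bucket": bucket, "Name": key}},
        "FeatureTypes": ["QUERIES"],
        "AdaptersConfig": {
            "Adapters": [{
                "AdapterId": adapter_id,
                "Version": adapter_version,
                "Pages": ["*"],
            }]
        },
    }

# In the pipeline these would feed the actual clients, e.g.:
#   boto3.client("bedrock-runtime").invoke_model(**build_bedrock_kwargs(...))
#   boto3.client("textract").analyze_document(**build_textract_kwargs(...))
```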
💡 Suggested Implementation Approach
Some ideas that could make this flexible and extensible:
Add configuration options (e.g., in stack parameters or config files) to:
• Specify Bedrock inference profile IDs
• Specify Textract adapter IDs
Pass these values dynamically through the pipeline:
• Step Functions / Lambda layers
• Service integration calls
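As one possible shape for the dynamic pass-through: a Lambda resolver that prefers per-invocation values from the Step Functions input and falls back to stack-level environment variables. The event keys and variable names here are hypothetical placeholders, not existing solution config:

```python
import os

def resolve_ai_config(event: dict) -> dict:
    """Resolve per-invocation AI-service settings: values in the Step
    Functions input win; stack-level environment variables act as defaults."""
    return {
        "inference_profile_arn": event.get("inferenceProfileArn")
            or os.environ.get("DEFAULT_INFERENCE_PROFILE_ARN"),
        "adapter_id": event.get("textractAdapterId")
            or os.environ.get("DEFAULT_TEXTRACT_ADAPTER_ID"),
    }
```

This keeps the configuration optional: deployments that set neither the input fields nor the environment variables would simply get `None` and retain today's default behavior.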