feat: add structured output support via outputConfig to AmazonBedrockChatGenerator #3088

Akash504-ai wants to merge 12 commits into deepset-ai:main

Conversation
@Akash504-ai thanks for the contribution! Please add unit and integration tests to make sure the feature works.
I ran formatting on the modified test file, but CI is still reporting a formatting issue on tests/test_chat_generator.py, which does not exist locally under that path. Could you confirm if there is another file or CI-specific path I should format?
run

Thanks! I tried running it. Just want to make sure I follow the repo's preferred approach.
You need to run the hatch command above; it should update these files and make the check pass.
I ran it.

Run it only over the test files that need reformatting.
I followed your suggestion and ran formatting specifically on the affected test files. Locally, Black and Ruff both report that all files are already correctly formatted, and no changes are produced. I also aligned the Black version with CI, but the result is still the same.

I ran it myself locally and updated the branch to unblock you. You are probably running an older version of Ruff.

Thanks a lot for fixing that and unblocking the PR, really appreciate it! That makes sense regarding the Ruff version mismatch. I'll make sure to keep my tooling in sync going forward.
- `maxTokens`: Maximum number of tokens to generate.
- `stopSequences`: List of stop sequences to stop generation.
- `temperature`: Sampling temperature.
- `topP`: Nucleus sampling parameter.

We can add `outputConfig` here.
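The review comment asks for `outputConfig` to be added to the docstring list. A hedged sketch of how that fragment of the `generation_kwargs` documentation might read — the first four entries come from the existing docstring, while the `outputConfig` line and its example value are assumptions, not the PR's actual wording:

```python
# Hypothetical docstring fragment for generation_kwargs; only the
# outputConfig entry is new, and its example value is an assumption.
GENERATION_KWARGS_DOC = """\
- `maxTokens`: Maximum number of tokens to generate.
- `stopSequences`: List of stop sequences to stop generation.
- `temperature`: Sampling temperature.
- `topP`: Nucleus sampling parameter.
- `outputConfig`: Structured output configuration, e.g. {"textFormat": "json"}.
"""
```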
I have added `outputConfig`.
Did you run this with AWS credentials and see it running? I'm trying it locally (dependencies installed), and I'm having an error. Code snippet:

```python
from haystack_integrations.components.generators.amazon_bedrock import AmazonBedrockChatGenerator
from haystack.dataclasses import ChatMessage

generator = AmazonBedrockChatGenerator(model="global.anthropic.claude-sonnet-4-6")
messages = [
    ChatMessage.from_system("You are a helpful assistant that answers question in Spanish only"),
    ChatMessage.from_user("What's Natural Language Processing? Be brief."),
]
response = generator.run(messages, generation_kwargs={"outputConfig": {"textFormat": "json"}})
```
Thanks for testing this and sharing the example. I didn't test this against the actual AWS API, and I can see the problem now.
Do you have a way to test this yourself on your side? I'm running it and I now get this error:
I haven't been able to run it against AWS credentials on my side yet, so I missed this validation issue. From your error, it looks like `outputConfig` is not supported by the Converse API (even via `additionalModelRequestFields`). I'll remove it from the request parameters.
I will close this PR and continue on this one: #3108. The issue is that we need to run the integration tests in CI. I've also fixed the issue; it's related to an older version of boto3. Structured output is only supported from 1.42 onwards, and pinning that version also brings some dependency conflicts. In any case, I've kept your commits.
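The comment above notes that Converse structured output only works on boto3 1.42 onwards. A hypothetical version guard of the kind a test suite could use to skip structured-output tests on older installs — this helper is a sketch, not part of the integration:

```python
def supports_structured_output(boto3_version: str) -> bool:
    """Return True when the given boto3 version string is at least 1.42.

    Hypothetical helper: compares only the major.minor components, which
    is enough for a skip-if guard in tests.
    """
    major, minor = (int(part) for part in boto3_version.split(".")[:2])
    return (major, minor) >= (1, 42)
```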
Related Issues

- AmazonBedrockChatGenerator #3067

Proposed Changes
This PR adds structured output support to `AmazonBedrockChatGenerator` by introducing handling for the `outputConfig` parameter in requests to the Amazon Bedrock Converse API.

What was the issue?
Previously, any structured output configuration (e.g. `outputConfig.textFormat`) passed via `generation_kwargs` would be incorrectly forwarded under `additionalModelRequestFields`. However, according to AWS Bedrock API requirements, `outputConfig` must be provided as a top-level parameter in the request.

What was changed?
- Extracted `outputConfig` from `generation_kwargs`
- Ensured it is removed from `additionalModelRequestFields`
- Added it correctly to the request payload as a top-level parameter
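The three steps above can be sketched roughly as follows. The real PR edits `_prepare_request_params`; the helper name and exact payload layout here are assumptions for illustration, not the PR's actual code:

```python
def build_converse_request(generation_kwargs: dict) -> dict:
    """Hypothetical sketch: lift outputConfig out of generation_kwargs so it
    becomes a top-level Converse request parameter instead of landing in
    additionalModelRequestFields."""
    kwargs = dict(generation_kwargs)  # copy so the caller's dict is untouched
    output_config = kwargs.pop("outputConfig", None)  # 1. extract
    request: dict = {}
    if kwargs:
        # 2. whatever remains is forwarded as before, so outputConfig
        #    no longer leaks into additionalModelRequestFields
        request["additionalModelRequestFields"] = kwargs
    if output_config is not None:
        request["outputConfig"] = output_config  # 3. top-level parameter
    return request
```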
Result

Users can now pass structured output configurations like `generation_kwargs={"outputConfig": {"textFormat": "json"}}` and have them properly interpreted by the Bedrock Converse API.
How did you test it?

- Verified that `outputConfig` is extracted from `generation_kwargs` and removed from `additionalModelRequestFields`
- Manually validated request construction logic against AWS documentation
- Ensured no regressions in existing parameter handling (`inferenceConfig`, `toolConfig`, etc.)

Notes for the reviewer

- The changes are contained in the request construction logic (`_prepare_request_params`)

Checklist

- Used a conventional commit prefix (`feat:`) for the PR title