Commit 4f765c2

fix(evaluate): Remove ModelPackageConfig from EvaluateBaseModel steps
When evaluate_base_model=True, the EvaluateBaseModel step in both DETERMINISTIC_TEMPLATE and CUSTOM_SCORER_TEMPLATE incorrectly included ModelPackageConfig with SourceModelPackageArn, causing the base-model evaluation to load fine-tuned model weights instead of using only the base model from the public hub. As a result, both evaluations were identical, leading users to believe fine-tuning had no effect.

Remove ModelPackageConfig from the EvaluateBaseModel step in both templates so it uses only BaseModelArn from ServerlessJobConfig. The EvaluateCustomModel step retains ModelPackageConfig so it still loads the fine-tuned weights. This is consistent with the fix already applied to the LLMAJ_TEMPLATE.

---
X-AI-Prompt: Fix BenchMarkEvaluator evaluate_base_model bug from D406780217
X-AI-Tool: Kiro
sim: https://t.corp.amazon.com/D406780217
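After the fix, the Arguments of the EvaluateBaseModel step in both templates reduce to roughly the following (abbreviated sketch reconstructed from the diff context below; fields outside that context, and the closing of the surrounding object, are omitted or assumed):

```json
{
    "RoleArn": "{{ role_arn }}",
    "ServerlessJobConfig": {
        "BaseModelArn": "{{ base_model_arn }}",
        "AcceptEula": true
    }
}
```

With no ModelPackageConfig present, the step can only resolve the base model via ServerlessJobConfig.BaseModelArn, which is the intended behavior for the base-model side of the comparison.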
1 parent 6a1ba54 commit 4f765c2

File tree

1 file changed: +0 −8 lines changed


sagemaker-train/src/sagemaker/train/evaluate/pipeline_templates.py

Lines changed: 0 additions & 8 deletions
@@ -94,10 +94,6 @@
 "Type": "Training",
 "Arguments": {
 "RoleArn": "{{ role_arn }}",
-"ModelPackageConfig": {
-"ModelPackageGroupArn": "{{ model_package_group_arn }}",
-"SourceModelPackageArn": "{{ source_model_package_arn }}"
-},
 "ServerlessJobConfig": {
 "BaseModelArn": "{{ base_model_arn }}",
 "AcceptEula": true,
@@ -614,10 +610,6 @@
 "Type": "Training",
 "Arguments": {
 "RoleArn": "{{ role_arn }}",
-"ModelPackageConfig": {
-"ModelPackageGroupArn": "{{ model_package_group_arn }}",
-"SourceModelPackageArn": "{{ source_model_package_arn }}"
-},
 "ServerlessJobConfig": {
 "BaseModelArn": "{{ base_model_arn }}",
 "AcceptEula": true,
