
Use StratifiedStandardize for per-task Y standardization in TL (#5194)

Open
hvarfner wants to merge 4 commits into facebook:main from hvarfner:export-D102197139

Conversation


@hvarfner hvarfner commented Apr 29, 2026

Summary:

Adds per-task outcome standardization to the transfer learning adapter, ensuring each task's observations are standardized independently rather than jointly. Updates the default transform pipeline to use TL-specific outcome transforms.

This removes ambiguity about whether the right transforms have been applied (e.g. in QuickBO/warm-starting), where standardization is performed within, not across, experiments.

Differential Revision: D102197139
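The PR itself uses BoTorch's `StratifiedStandardize` outcome transform; the plain-NumPy function below is only an illustrative sketch of the semantics ("per-task standardization"), not the Ax/BoTorch API. Each task's observations are shifted and scaled by that task's own mean and standard deviation rather than by statistics pooled across tasks.

```python
import numpy as np

def stratified_standardize(y, task_ids, min_std=1e-8):
    """Standardize observations independently within each task (stratum).

    Sketch only: the real transform in this PR is BoTorch's
    StratifiedStandardize, which also stores the per-task statistics
    so predictions can be un-transformed.
    """
    y = np.asarray(y, dtype=float)
    task_ids = np.asarray(task_ids)
    out = np.empty_like(y)
    for t in np.unique(task_ids):
        mask = task_ids == t
        mu = y[mask].mean()
        sigma = y[mask].std()
        # Guard against near-constant tasks, as Standardize-style
        # transforms typically do with a minimum-stddev threshold.
        out[mask] = (y[mask] - mu) / (sigma if sigma > min_std else 1.0)
    return out
```

For example, tasks with very different output scales (say, one experiment's metric in [0, 2] and another's in [10, 30]) both end up zero-mean and unit-variance within their own stratum, which is the point of standardizing per experiment rather than jointly.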

@meta-cla meta-cla Bot added the CLA Signed Do not delete this pull request or issue due to inactivity. label Apr 29, 2026

meta-codesync Bot commented Apr 29, 2026

@hvarfner has exported this pull request. If you are a Meta employee, you can view the originating Diff in D102197139.

hvarfner pushed a commit to hvarfner/Ax that referenced this pull request Apr 29, 2026
…ook#5194)

@hvarfner hvarfner force-pushed the export-D102197139 branch from 8046f90 to 407478c Compare April 29, 2026 14:56
@meta-codesync meta-codesync Bot changed the title Use StratifiedStandardize for per-task Y standardization in TL Use StratifiedStandardize for per-task Y standardization in TL (#5194) Apr 29, 2026
hvarfner pushed a commit to hvarfner/Ax that referenced this pull request Apr 29, 2026
…ook#5194)


codecov-commenter commented Apr 29, 2026

Codecov Report

❌ Patch coverage is 98.89503% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 96.61%. Comparing base (47defa1) to head (e407627).

Files with missing lines                    Patch %   Lines
ax/adapter/transfer_learning/adapter.py     92.85%    2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #5194      +/-   ##
==========================================
+ Coverage   96.38%   96.61%   +0.23%     
==========================================
  Files         617      617              
  Lines       69579    69638      +59     
==========================================
+ Hits        67065    67284     +219     
+ Misses       2514     2354     -160     


Carl Hvarfner added 4 commits May 12, 2026 13:29
…TaskGP (facebook#5192)

Summary:
X-link: meta-pytorch/botorch#3296


Automatically configures learned feature imputation for models that pad heterogeneous per-task data to the full joint feature space. Models with native heterogeneity support are excluded from this automatic configuration.

Reviewed By: saitcakmak

Differential Revision: D101841497
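A minimal sketch of the padding step described above, with invented names (`pad_to_joint_space`, `task_cols`, `joint_cols`): a task that only observes a subset of the joint features gets its feature matrix widened to the full joint space, with placeholder entries in the absent columns that a downstream model can replace with learned imputation values.

```python
import numpy as np

# Hypothetical illustration, not the Ax/BoTorch API: pad one task's
# feature matrix to the full joint feature space. Absent features are
# filled with NaN placeholders that mark where a model with learned
# feature imputation would substitute its learned values.
def pad_to_joint_space(X_task, task_cols, joint_cols, fill_value=np.nan):
    n = X_task.shape[0]
    X_full = np.full((n, len(joint_cols)), fill_value, dtype=float)
    for j, col in enumerate(task_cols):
        # Copy each observed column into its slot in the joint space.
        X_full[:, joint_cols.index(col)] = X_task[:, j]
    return X_full
```

Models that natively handle heterogeneous feature sets would skip this padding entirely, which is why the commit excludes them from the automatic configuration.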
facebook#5193)

Summary:

Switches the default heterogeneous transfer learning model from a specialized per-task kernel model to a standard multi-task GP with learned feature imputation. The previous default model class is marked as deprecated.

Reviewed By: saitcakmak

Differential Revision: D102197137
…acebook#5200)

Summary:

Adds a `data_parameters` argument to `TorchAdapter._get_fit_args` that decouples search space digest (SSD) construction (model params) from data column extraction (target params). This lets the TL adapter set `_model_space` to include source-only RangeParameters directly, so the SSD naturally covers the full joint feature space, eliminating the need for the `_expand_ssd_to_joint_space` post-hoc expansion.

Overrides `_set_search_space` to add source-only RangeParameters from the joint search space to `_model_space` while preserving target bounds for shared params (Normalize stays anchored to target bounds). At gen time, `self.parameters` is temporarily swapped to target-only so `extract_search_space_digest` sees only params present in the gen-time search space.

Deletes `_expand_ssd_to_joint_space` (~90 lines).

Differential Revision: D104702983
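The temporary swap of `self.parameters` at gen time can be sketched generically. The context manager below is a hypothetical illustration (the name `restricted_parameters` and the attribute access are invented for the sketch, not Ax internals): the adapter's parameter list is narrowed to target-only parameters for the duration of the call and restored afterward, even if an exception is raised.

```python
from contextlib import contextmanager

# Hypothetical sketch of the gen-time swap described above: temporarily
# restrict an object's `parameters` list so that downstream extraction
# (e.g. a search-space-digest builder) only sees parameters present in
# the gen-time search space, then always restore the full list.
@contextmanager
def restricted_parameters(adapter, target_parameters):
    full = adapter.parameters
    adapter.parameters = [p for p in full if p in target_parameters]
    try:
        yield adapter
    finally:
        adapter.parameters = full
```

Using a `try/finally`-backed context manager keeps the swap exception-safe, which matters when the restricted state is shared adapter state rather than a local variable.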
…ook#5194)

@hvarfner hvarfner force-pushed the export-D102197139 branch from 407478c to e407627 Compare May 12, 2026 20:30

Labels

CLA Signed Do not delete this pull request or issue due to inactivity. fb-exported meta-exported
