Make GPU transforms more memory efficient#887
Conversation
This change is part of the following stack. Change managed by git-spice.
Code Review
This pull request introduces memory-efficient chunked processing for TorchQuantileTransformer and TorchTruncatedSVD to bound peak memory usage during fitting and transformation. It also adds support for randomized SVD in TorchTruncatedSVD and implements a CPU fallback for the MPS backend to handle unsupported SVD operations. Review feedback suggests optimizing memory usage by replacing tensor allocations with Python scalars in torch.where calls and removing redundant tensor expansions.
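The techniques summarized above can be sketched as follows. This is a minimal illustration under stated assumptions, not the PR's actual code: `transform_chunked` and `svd_with_cpu_fallback` are hypothetical helper names, and the MPS limitation is hedged in the comments.

```python
import torch


def transform_chunked(x, fn, chunk_size=1024):
    # Hypothetical helper: apply fn one row-chunk at a time so peak memory
    # scales with chunk_size rather than with the full input.
    return torch.cat([fn(chunk) for chunk in torch.split(x, chunk_size)])


def svd_with_cpu_fallback(x):
    # Hypothetical fallback: torch.linalg.svd may be unsupported on the MPS
    # backend, so compute on CPU and move the factors back to the device.
    if x.device.type == "mps":
        U, S, Vh = torch.linalg.svd(x.cpu(), full_matrices=False)
        return U.to(x.device), S.to(x.device), Vh.to(x.device)
    return torch.linalg.svd(x, full_matrices=False)


x = torch.randn(10, 4)

# Review feedback illustrated: torch.where accepts Python scalars, which
# avoids allocating a whole tensor (e.g. torch.zeros_like(x)) just to
# supply the fallback branch.
relu_alloc = torch.where(x > 0, x, torch.zeros_like(x))
relu_scalar = torch.where(x > 0, x, 0.0)
assert torch.equal(relu_alloc, relu_scalar)

# Chunked application gives the same result as the full-batch call.
assert torch.equal(transform_chunked(x, lambda c: c * 2.0, chunk_size=3), x * 2.0)
```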
If you're maxed out then it's totally fine to keep it like this, but did you consider using
Yep, using it in torch svd now to reduce line count! :)