Conversation
This is not a valid code change; it is just instructions (which contain bogus code changes, most likely from an LLM).
Assuming that it does work on old GPUs (which I cannot verify), the proper way to make a code change adding support would be to write some auto-detection based on CUDA GPU Compute Capability and then dynamically switch which torch version it auto-installs
- similar example #16972
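A rough sketch of what such auto-detection could look like (this is only an illustration: the function names, the nvidia-smi compute_cap query field, the capability cutoff, and the fallback torch command are assumptions, not code from the webui repository):

```python
import subprocess

def detect_compute_capability():
    """Query the GPU's CUDA Compute Capability via nvidia-smi.

    Returns a float such as 5.2 (Maxwell) or 8.6 (Ampere), or None if
    detection fails (no NVIDIA GPU, or a driver too old to report the
    compute_cap field).
    """
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=compute_cap", "--format=csv,noheader"],
            text=True,
        )
        # With multiple GPUs, use the lowest capability so the chosen
        # torch build works on all of them.
        return min(float(line) for line in out.splitlines() if line.strip())
    except (OSError, subprocess.CalledProcessError, ValueError):
        return None

def choose_torch_command(capability):
    """Pick a pip command for the detected capability.

    The version numbers and the 6.0 cutoff are illustrative assumptions;
    a real PR would need to verify which torch builds actually support
    which architectures.
    """
    if capability is not None and capability < 6.0:
        # Maxwell and older: fall back to the cu113 build proposed in this PR
        return ("pip install torch==1.12.1+cu113 "
                "--extra-index-url https://download.pytorch.org/whl/cu113")
    # Default path: leave the normal torch install command unchanged
    return ("pip install torch "
            "--extra-index-url https://download.pytorch.org/whl/cu121")
```

The key design point is that the default branch stays untouched for everyone else; only detected old hardware gets the pinned version.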
On the other hand, if someone wants to share instructions on how to use it on old GPUs, they should use the custom TORCH_COMMAND and TORCH_INDEX_URL environment variables, which are the intended way of letting people configure a custom torch version, rather than editing the code and breaking it for others.
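For example, on Linux that override could go in webui-user.sh (the torch version is the one this PR proposes; the matching torchvision version is an assumption on my part):

```shell
# webui-user.sh — pin an older torch build for Maxwell GPUs
# instead of editing launch_utils.py
export TORCH_COMMAND="pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113"
export TORCH_INDEX_URL="https://download.pytorch.org/whl/cu113"
```

The launcher reads these variables at startup, so no code changes are needed and other users are unaffected.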
If you really want to make a PR for this, make a new PR with auto-detection code and without unnecessary changes.
Description
This PR fixes CUDA initialization issues on Maxwell GPUs by enforcing the correct PyTorch version (1.12.1+cu113) and adjusting requirements accordingly.
Also includes updated README instructions.
In modules/launch_utils.py, the following change was made: the original lines (380-382) were replaced with:
Screenshots/videos:
Checklist: