Skip to content

Maxwell Fix#17158

Closed
ghost wants to merge 4 commits into
devfrom
unknown repository
Closed

Maxwell Fix#17158
ghost wants to merge 4 commits into
devfrom
unknown repository

Conversation

@ghost
Copy link
Copy Markdown

@ghost ghost commented Oct 20, 2025

Description

This PR fixes CUDA initialization issues on Maxwell GPUs by enforcing the correct PyTorch version (1.12.1+cu113) and adjusting requirements accordingly.
Also includes updated README instructions.

In modules/launch_utils.py, the following change was made:

Original lines (380-382):

if args.reinstall_torch or not is_installed("torch") or not is_installed("torchvision"):
    run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
    startup_timer.record("install torch")

were replaced with:

    if args.reinstall_torch or not is_installed("torch") or not is_installed("torchvision"):
    # Skipping automatic torch install; using manually installed torch 1.12.1 + cu113
     print("Skipping torch installation. Using existing torch 1.12.1 + cu113.")
    startup_timer.record("install torch")

Screenshots/videos:

pytorchinvenv1 modelloaded1

Checklist:

@ghost ghost requested review from AUTOMATIC1111, catboxanon and w-e-w as code owners October 20, 2025 12:09
@ghost ghost closed this Oct 20, 2025
@ghost ghost reopened this Oct 20, 2025
@jonses759-tech

This comment was marked as spam.

Copy link
Copy Markdown
Collaborator

@w-e-w w-e-w left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

this is not a valid code change, this is just instructions (which contains bogus code changes most likely an LLM)

assuming that it dose works on old GPUs (which I cannot verify) the proper way make a code change to add support would be to write some auto detection based on CUDA GPU Compute Capability and then dynamically switch which torch version it auto installs

on the other hand if someone wants to share instructions on how to use it on old GPUs, they should use custom TORCH_COMMAND TORCH_INDEX_URL environment variables which is the intended way of letting people configure custom torch version, not editing the code and breaking it for others


if you really want to make a PR for this, make a new PR auto with detection code without unnecessary changes

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants