
Add ComfyUI Pipeline Memory Barrier node#2836

Open
brosequist wants to merge 1 commit into Comfy-Org:main from brosequist:add-pipeline-barrier

Conversation

@brosequist

New custom node: ComfyUI Pipeline Memory Barrier

Repo: https://github.com/brosequist/ComfyUI-PipelineBarrier

What it does

Inserts an explicit GPU cache flush between pipeline stages in a ComfyUI workflow. The node calls:

  1. comfy.model_management.soft_empty_cache(force=True)
  2. gc.collect()
  3. torch.cuda.empty_cache() + torch.cuda.synchronize() on every visible GPU

then logs GPU allocated/reserved memory and system RAM usage before and after the flush.
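The three-step sequence above can be sketched as a plain function (a minimal sketch, not the node's actual implementation; the function name is illustrative, and the comfy.model_management import is only available inside a running ComfyUI environment, so it is guarded here):

```python
import gc


def pipeline_memory_barrier(value):
    """Illustrative sketch: flush allocator caches, then return the input unchanged."""
    try:
        # Step 1: ask ComfyUI to release cached model memory.
        import comfy.model_management
        comfy.model_management.soft_empty_cache(force=True)
    except ImportError:
        pass  # not running inside ComfyUI
    # Step 2: collect unreachable Python objects still holding tensor references.
    gc.collect()
    try:
        # Step 3: return cached CUDA blocks to the driver on every visible GPU,
        # and synchronize so pending work finishes before the next stage loads.
        import torch
        if torch.cuda.is_available():
            for dev in range(torch.cuda.device_count()):
                with torch.cuda.device(dev):
                    torch.cuda.empty_cache()
                    torch.cuda.synchronize()
    except ImportError:
        pass
    return value
```

The value is returned untouched so the barrier can sit on any wire without affecting the data flowing through it.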

Why it's needed

Multi-stage workflows (e.g. WanVideo two-pass I2V with ComfyUI-MultiGPU and BlockSwap) can OOM-kill the process at the start of the next stage even after model offload completes. PyTorch's CUDA allocator caches freed blocks rather than returning them to the OS immediately. This node flushes those caches so the OS sees the free memory before the next stage loads its models.
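The gap the node closes is the difference between what PyTorch has allocated and what it has reserved: reserved-but-unallocated blocks are free from PyTorch's point of view but still count as used from the OS's point of view until `empty_cache()` returns them. A small helper (hypothetical name, guarded so it degrades to `None` off-GPU) makes that measurable:

```python
def cuda_cache_bytes():
    """Bytes PyTorch holds in its CUDA cache: reserved from the driver but not
    currently allocated to any tensor. The OS counts these as used until
    torch.cuda.empty_cache() hands them back."""
    try:
        import torch
    except ImportError:
        return None  # torch not installed; nothing to measure
    if not torch.cuda.is_available():
        return None  # no CUDA device visible
    return torch.cuda.memory_reserved() - torch.cuda.memory_allocated()
```

Sampling this before and after the barrier shows how much memory the flush actually returned to the OS.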

Usage

Wire it into any connection between pipeline stages, latent or otherwise. The input passes through unchanged; the node accepts any wire type via the _AnyType sentinel.

[Pass 1 Sampler] → [Pipeline Memory Barrier] → [Pass 2 Sampler]
[Pass 2 Sampler] → [Pipeline Memory Barrier] → [VAE Decode]
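The any-wire behavior relies on the common ComfyUI custom-node trick of a string subclass that compares unequal to nothing, so the frontend's type check accepts every connection. A minimal sketch of that pattern (the node skeleton is illustrative; the real node lives in the linked repo):

```python
class AnyType(str):
    """Compares unequal to nothing, so ComfyUI's type check accepts any wire."""
    def __ne__(self, other):
        return False


ANY = AnyType("*")


class PipelineMemoryBarrier:
    """Hypothetical node skeleton showing the passthrough shape."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"value": (ANY,)}}

    RETURN_TYPES = (ANY,)
    FUNCTION = "run"
    CATEGORY = "utils"

    def run(self, value):
        # Flush caches here (see the steps above), then pass the input through.
        return (value,)
```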

No additional pip dependencies (it uses psutil, which is already in the base ComfyUI environment).
