
Add ComfyUI-PipelineBarrier to custom node list#2839

Open
brosequist wants to merge 1 commit into Comfy-Org:main from brosequist:add/comfyui-pipeline-barrier

Conversation

@brosequist

New Node: ComfyUI-PipelineBarrier

Repository: https://github.com/brosequist/ComfyUI-PipelineBarrier

Description: A single node that flushes PyTorch's GPU memory cache and runs garbage collection between pipeline stages, preventing OOM errors in multi-stage workflows such as two-pass video generation.

What it does

Calls soft_empty_cache(force=True), gc.collect(), torch.cuda.empty_cache(), and torch.cuda.synchronize() on every visible GPU. Accepts a LATENT input and returns it unchanged — it is a pure synchronization point with no effect on tensor values.
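For reviewers unfamiliar with the node pattern, a minimal sketch of how such a barrier node could be structured is below. The class name, category, and helper function are illustrative, not taken from the repository, and since ComfyUI's `comfy.model_management.soft_empty_cache` is not importable outside ComfyUI, the sketch calls the underlying `torch.cuda` functions directly:

```python
# Hypothetical sketch of a pass-through barrier node; names are
# illustrative and do not necessarily match the actual repository.
import gc

try:
    import torch
except ImportError:  # let the sketch load without a GPU stack installed
    torch = None


def flush_gpu_caches():
    """Collect garbage, then release cached blocks on every visible GPU."""
    gc.collect()
    if torch is not None and torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            with torch.cuda.device(i):
                torch.cuda.empty_cache()  # return cached blocks to the driver
                torch.cuda.synchronize()  # wait for pending kernels to finish


class PipelineBarrier:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"latent": ("LATENT",)}}

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "barrier"
    CATEGORY = "utils"

    def barrier(self, latent):
        flush_gpu_caches()
        return (latent,)  # pure synchronization point: latent is unchanged
```

The pass-through `LATENT` input/output is what forces the graph executor to schedule the flush between the upstream and downstream stages.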

Why it's useful

Long multi-stage workflows (e.g. WanVideo two-pass I2V with BlockSwap) can OOM at the start of a second stage even after a model has been offloaded, because PyTorch's allocator caches freed blocks rather than returning them to the OS immediately. Placing this node between stages ensures caches are flushed before the next stage begins.

Checklist

  • Node is publicly available on GitHub
  • No pip dependencies beyond what ComfyUI already installs
  • NODE_CLASS_MAPPINGS and NODE_DISPLAY_NAME_MAPPINGS exported from __init__.py

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
