Add ComfyUI-PipelineBarrier to custom node list #2839
Open
brosequist wants to merge 1 commit into Comfy-Org:main from
Conversation
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
New Node: ComfyUI-PipelineBarrier
Repository: https://github.com/brosequist/ComfyUI-PipelineBarrier
Description: A single node that flushes PyTorch's GPU memory cache and runs garbage collection between pipeline stages, preventing OOM errors in multi-stage workflows such as two-pass video generation.
What it does
Calls soft_empty_cache(force=True), gc.collect(), torch.cuda.empty_cache(), and torch.cuda.synchronize() on every visible GPU. Accepts a LATENT input and returns it unchanged; it is a pure synchronization point with no effect on tensor values.

Why it's useful
Long multi-stage workflows (e.g. WanVideo two-pass I2V with BlockSwap) can OOM at the start of a second stage even after a model has been offloaded, because PyTorch's allocator caches freed blocks rather than returning them to the OS immediately. Placing this node between stages ensures caches are flushed before the next stage begins.
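The behavior described above can be sketched roughly as follows. This is an illustrative reconstruction, not the repository's actual code: the class name, the "utils" category, and the flush_gpu_caches helper are assumptions, and in ComfyUI the node would additionally call comfy.model_management.soft_empty_cache(force=True) before the raw torch calls.

```python
import gc


def flush_gpu_caches():
    """Run garbage collection and, when CUDA is available, empty the
    allocator cache and synchronize every visible GPU."""
    gc.collect()
    try:
        import torch
    except ImportError:
        return  # torch not installed: garbage collection is all we can do
    if torch.cuda.is_available():
        for dev in range(torch.cuda.device_count()):
            with torch.cuda.device(dev):
                torch.cuda.empty_cache()
                torch.cuda.synchronize()


class PipelineBarrier:
    """Pass-through node: flushes caches, then returns its input unchanged."""

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "barrier"
    CATEGORY = "utils"

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"latent": ("LATENT",)}}

    def barrier(self, latent):
        flush_gpu_caches()
        return (latent,)  # same object, no tensor values touched
```

Because the node only synchronizes and returns its input, it can be dropped between any two stages of a workflow without changing the result.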
Checklist
NODE_CLASS_MAPPINGS and NODE_DISPLAY_NAME_MAPPINGS exported from __init__.py
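For reference, a ComfyUI custom node package's __init__.py typically exposes these two mappings in the following shape. The key and display name below are illustrative; the real values come from the linked repository, and the stub class stands in for the node class defined there.

```python
class PipelineBarrier:  # stand-in for the node class defined elsewhere
    pass


# Maps internal node identifiers to node classes.
NODE_CLASS_MAPPINGS = {"PipelineBarrier": PipelineBarrier}

# Maps internal node identifiers to names shown in the ComfyUI menu.
NODE_DISPLAY_NAME_MAPPINGS = {"PipelineBarrier": "Pipeline Barrier"}
```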