Add ComfyUI Pipeline Memory Barrier node #2836
Open
brosequist wants to merge 1 commit into Comfy-Org:main
New custom node: ComfyUI Pipeline Memory Barrier
Repo: https://github.com/brosequist/ComfyUI-PipelineBarrier
What it does
Inserts an explicit GPU cache flush between pipeline stages in a ComfyUI workflow. The node calls:

- comfy.model_management.soft_empty_cache(force=True)
- gc.collect()
- torch.cuda.empty_cache() + torch.cuda.synchronize() on every visible GPU

then logs GPU allocated/reserved memory and system RAM before and after.
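The flush sequence above can be sketched roughly as follows. This is an illustrative helper, not the repo's exact code; `comfy.model_management` and `torch` only exist inside a ComfyUI environment, so both imports are guarded here.

```python
import gc

# torch is only present in a ComfyUI/PyTorch environment; guard the import
# so the sketch also runs (as a no-op) elsewhere.
try:
    import torch
except ImportError:
    torch = None


def flush_gpu_caches() -> bool:
    """Force cached allocator blocks back to the OS between pipeline stages.

    Returns True if a CUDA flush was actually performed.
    """
    try:
        # ComfyUI's own flush hook (only importable inside ComfyUI).
        from comfy import model_management
        model_management.soft_empty_cache(force=True)
    except ImportError:
        pass  # running outside ComfyUI

    gc.collect()  # drop Python-side references first, so tensors become freeable

    if torch is not None and torch.cuda.is_available():
        for dev in range(torch.cuda.device_count()):
            with torch.cuda.device(dev):
                torch.cuda.empty_cache()  # return cached blocks to the driver
                torch.cuda.synchronize()  # ensure pending frees have completed
        return True
    return False
```

The `gc.collect()` before `empty_cache()` matters: the allocator can only release blocks whose tensors no longer have live Python references.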
Why it's needed
Multi-stage workflows (e.g. WanVideo two-pass I2V with ComfyUI-MultiGPU and BlockSwap) can OOM-kill the process at the start of the next stage even after model offload completes. PyTorch's CUDA allocator caches freed blocks rather than returning them to the OS immediately. This node flushes those caches so the OS sees the free memory before the next stage loads its models.
Usage
Wire it on any latent (or any typed) connection between pipeline stages. The input passes through unchanged. Works with any wire type via the _AnyType sentinel. No additional pip dependencies (uses psutil, which is already in the base ComfyUI environment).
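The _AnyType sentinel mentioned above follows a common ComfyUI custom-node pattern: a `str` subclass whose `__ne__` always returns False, so ComfyUI's type check (which compares type strings with `!=`) accepts any wire. A minimal sketch, with illustrative names rather than the repo's exact code:

```python
class _AnyType(str):
    """A type string that compares equal to every other type.

    ComfyUI validates connections by comparing socket type strings with !=,
    so making __ne__ always return False lets this socket accept any wire.
    """

    def __ne__(self, other):
        return False


any_type = _AnyType("*")


class PipelineMemoryBarrier:
    """Pass-through node: flushes caches, then forwards the input unchanged."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"value": (any_type, {})}}

    RETURN_TYPES = (any_type,)
    FUNCTION = "run"
    CATEGORY = "utils"

    def run(self, value):
        # A real implementation would flush GPU caches here before forwarding.
        return (value,)
```

Because the input is returned as-is, the barrier can sit on a LATENT, IMAGE, MODEL, or any other connection without changing workflow semantics.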