TeaCache has now been integrated into ComfyUI and is compatible with the ComfyUI native nodes. ComfyUI-TeaCache is easy to use, simply connect the TeaCache node with the ComfyUI native nodes for seamless usage.
## Updates
- Jun 15 2025: ComfyUI-TeaCache supports HiDream-I1-Dev and Lumina-Image-2.0, and adds a `cache_device` option:
  - It can achieve a 1.5x lossless speedup and a 2x speedup without much visual quality degradation for HiDream-I1-Dev.
  - Support HiDream-I1-Dev LoRA!
  - It can achieve a 1.5x lossless speedup and a 1.7x speedup without much visual quality degradation for Lumina-Image-2.0.
  - Support Lumina-Image-2.0 LoRA!
  - Add a `cache_device` option according to feedback in issues [#74](https://github.com/welltop-cn/ComfyUI-TeaCache/issues/74), [#104](https://github.com/welltop-cn/ComfyUI-TeaCache/issues/104) and [#143](https://github.com/welltop-cn/ComfyUI-TeaCache/issues/143).
- May 22 2025: ComfyUI-TeaCache supports HiDream-I1-Full and redesigns the TeaCache options:
  - It can achieve a 1.5x lossless speedup and a 2x speedup without much visual quality degradation.
  - Support HiDream-I1-Full LoRA!
If the image/video quality after applying TeaCache is low, reduce `rel_l1_thresh`. We do not recommend adjusting `start_percent` and `end_percent` unless you are an experienced engineer or creator.
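The role of `rel_l1_thresh` can be sketched in plain Python. This is an illustrative simplification, not the actual TeaCache code (which operates on model embeddings and residuals): a step's cached output is reused while the relative L1 change of its input stays below the threshold, so a lower threshold reuses fewer steps and preserves more quality.

```python
# Illustrative sketch of a rel_l1_thresh-style cache (simplified; the real
# TeaCache works on model tensors, not raw Python lists).
def make_cached_step(step_fn, rel_l1_thresh):
    state = {"inp": None, "out": None}

    def cached(inp):
        prev = state["inp"]
        if prev is not None:
            num = sum(abs(a - b) for a, b in zip(inp, prev))
            den = sum(abs(b) for b in prev) or 1.0
            if num / den < rel_l1_thresh:
                return state["out"]  # change is small: reuse cached output
        # change is large (or first call): recompute and refresh the cache
        state["inp"], state["out"] = list(inp), step_fn(inp)
        return state["out"]

    return cached

step = make_cached_step(lambda xs: [2.0 * x for x in xs], rel_l1_thresh=0.05)
a = step([1.0, 1.0])    # first call: computed, [2.0, 2.0]
b = step([1.001, 1.0])  # tiny relative change: cached [2.0, 2.0] is reused
c = step([2.0, 2.0])    # large relative change: recomputed, [4.0, 4.0]
```

With this picture, lowering `rel_l1_thresh` simply shrinks the band of "small enough" changes, trading speed for fidelity.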
If you have enough VRAM, select `cuda` in the `cache_device` option for faster inference at the cost of slightly more VRAM. If your VRAM is limited, select `cpu`, which uses no extra VRAM but makes inference slightly slower.
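The trade-off can be sketched with two hypothetical helpers (illustrative only; the TeaCache node manages this internally): keeping the cached residual on `cuda` avoids a copy on every reuse but occupies VRAM, while offloading it to `cpu` frees VRAM at the cost of a host-device transfer each time the cache is read back.

```python
import torch

# Hypothetical sketch of the cache_device trade-off.
def store_cache(residual: torch.Tensor, cache_device: str) -> torch.Tensor:
    return residual.to(cache_device)   # e.g. offload to CPU RAM

def load_cache(cached: torch.Tensor, compute_device: str) -> torch.Tensor:
    return cached.to(compute_device)   # copy back before reuse

compute_device = "cuda" if torch.cuda.is_available() else "cpu"
cached = store_cache(torch.ones(3, device=compute_device), "cpu")
restored = load_cache(cached, compute_device)
```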
The demo workflows ([flux](./examples/flux.json), [pulid_flux](./examples/pulid_flux.json), [hidream_i1_dev](./examples/hidream_i1_dev.json), [hidream_i1_full](./examples/hidream_i1_full.json), [lumina_image_2](./examples/lumina_image_2.json), [hunyuanvideo](./examples/hunyuanvideo.json), [ltx_video](./examples/ltx_video.json), [cogvideox](./examples/cogvideox.json), [wan2.1_t2v](./examples/wan2.1_t2v.json) and [wan2.1_i2v](./examples/wan2.1_i2v.json)) are placed in the `examples` folder.
### Compile Model
To use the Compile Model node, simply add a `Compile Model` node to your workflow after the `Load Diffusion Model` node or the `TeaCache` node. Compile Model uses `torch.compile` to enhance performance by compiling the model into more efficient intermediate representations (IRs). This compilation process leverages backend compilers to generate optimized code, which can significantly speed up inference. Compilation may take a long time the first time you run the workflow, but once the model is compiled, inference is extremely fast. The usage is shown below:
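As a minimal sketch of what such a node does under the hood (assuming a standard `torch.compile` call on PyTorch >= 2.0; the names below are illustrative, not the node's actual code):

```python
import torch

# Sketch: wrap a module with torch.compile so the first call traces and
# compiles it into optimized IRs, and later calls reuse the compiled code.
def compile_model(model: torch.nn.Module, backend: str = "inductor") -> torch.nn.Module:
    return torch.compile(model, backend=backend)

# Toy stand-in module; a real workflow compiles the loaded diffusion model.
# backend="eager" keeps this demo portable (no C++ toolchain needed);
# the default "inductor" backend is what delivers real speedups.
net = compile_model(torch.nn.Linear(4, 4), backend="eager")
out = net(torch.randn(2, 4))  # first call: compile + run; later calls are fast
```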