feat!: switch default hub model to FLUX.2 klein 9B (Q5_K_M GGUF)
BREAKING CHANGE: The default hub deployment now uses FLUX.2 klein 9B
instead of FLUX.1-dev-fp8. Existing workflows using CheckpointLoaderSimple
with flux1-dev-fp8.safetensors will not work with the new default image.
Use the flux1-dev-fp8 image tag for backward compatibility.
- Install ComfyUI-GGUF custom node in Dockerfile base stage
- Add flux2-klein-9b model download block (GGUF + text encoder + VAE)
- Change default MODEL_TYPE from flux1-dev-fp8 to flux2-klein-9b
- Add docker-bake.hcl target and CI/CD workflow entries
- Update hub config (description + 25GB container disk)
- Add new workflow files for FLUX.2 klein 9B GGUF
- Update documentation across README, hub README, and deployment guide
- Add diffusion_models + text_encoders to extra_model_paths.yaml
Closes #217
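The `extra_model_paths.yaml` bullet above can be pictured with a small sketch. This is a hypothetical fragment, not the file from the commit: the `base_path` and the group key are assumptions following ComfyUI's standard `extra_model_paths.yaml` layout; only the `diffusion_models` and `text_encoders` category names come from the commit message.

```yaml
# Hypothetical sketch of the extra_model_paths.yaml addition.
# Exact base_path and group name are assumptions; only the two
# category keys (diffusion_models, text_encoders) are from the commit.
runpod_worker_comfy:
  base_path: /comfyui/models
  diffusion_models: diffusion_models/   # GGUF unet, e.g. flux-2-klein-9b-Q5_K_M.gguf
  text_encoders: text_encoders/         # e.g. qwen_3_8b_fp8mixed.safetensors
```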
`.runpod/README.md` — 48 additions, 58 deletions
```diff
@@ -12,11 +12,11 @@ Run [ComfyUI](https://github.com/comfyanonymous/ComfyUI) workflows as a serverle
 ## What is included?
 
-This worker comes with the **FLUX.1-dev-fp8** (`flux1-dev-fp8.safetensors`) model pre-installed and **works only with this specific model** when deployed from the hub.
+This worker comes with the **FLUX.2 klein 9B** (`flux-2-klein-9b-Q5_K_M.gguf`) model pre-installed and **works only with this specific model** when deployed from the hub.
 
 ## Want to use a different model?
 
-**The hub deployment only supports FLUX.1-dev-fp8.** If you need any other model, custom nodes, or LoRAs, you have two options:
+**The hub deployment only supports FLUX.2 klein 9B (Q5_K_M GGUF).** If you need any other model, custom nodes, or LoRAs, you have two options:
```
```diff
@@ -60,104 +60,94 @@ This example uses a simplified workflow (replace with your actual workflow JSON)
 {
   "input": {
     "workflow": {
-      "6": {
+      "1": {
         "inputs": {
-          "text": "anime cat with massive fluffy fennec ears and a big fluffy tail blonde messy long hair blue eyes wearing a construction outfit placing a fancy black forest cake with candles on top of a dinner table of an old dark Victorian mansion lit by candlelight with a bright window to the foggy forest and very expensive stuff everywhere there are paintings on the walls",
-          "clip": ["30", 1]
+          "unet_name": "flux-2-klein-9b-Q5_K_M.gguf"
         },
-        "class_type": "CLIPTextEncode",
+        "class_type": "UnetLoaderGGUF",
         "_meta": {
-          "title": "CLIP Text Encode (Positive Prompt)"
+          "title": "Load GGUF Model"
         }
       },
-      "8": {
+      "2": {
         "inputs": {
-          "samples": ["31", 0],
-          "vae": ["30", 2]
+          "clip_name": "qwen_3_8b_fp8mixed.safetensors",
+          "type": "flux2"
         },
-        "class_type": "VAEDecode",
+        "class_type": "CLIPLoader",
         "_meta": {
-          "title": "VAE Decode"
+          "title": "Load Text Encoder"
         }
       },
-      "9": {
+      "3": {
         "inputs": {
-          "filename_prefix": "ComfyUI",
-          "images": ["8", 0]
+          "vae_name": "flux2-vae.safetensors"
         },
-        "class_type": "SaveImage",
+        "class_type": "VAELoader",
         "_meta": {
-          "title": "Save Image"
+          "title": "Load VAE"
         }
       },
-      "27": {
+      "4": {
         "inputs": {
-          "width": 512,
-          "height": 512,
-          "batch_size": 1
+          "text": "anime cat with massive fluffy fennec ears and a big fluffy tail blonde messy long hair blue eyes wearing a construction outfit placing a fancy black forest cake with candles on top of a dinner table of an old dark Victorian mansion lit by candlelight with a bright window to the foggy forest and very expensive stuff everywhere there are paintings on the walls",
```
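The loader nodes changed in the diff above can be assembled programmatically. A minimal sketch: `build_flux2_payload` is a hypothetical helper (not part of the worker), it includes only the nodes visible in the diff, and node `"4"`'s `class_type` is an assumption, since the diff is truncated after its `text` input; the sampler, decode, and save nodes that follow are omitted here.

```python
# Hypothetical helper mirroring the README example: builds the request
# payload for the FLUX.2 klein 9B GGUF workflow. Nodes "1"-"3" match the
# diff; node "4"'s class_type is assumed (the diff cuts off after "text"),
# and downstream sampler/decode/save nodes are omitted.
def build_flux2_payload(prompt: str) -> dict:
    workflow = {
        "1": {
            "inputs": {"unet_name": "flux-2-klein-9b-Q5_K_M.gguf"},
            "class_type": "UnetLoaderGGUF",
            "_meta": {"title": "Load GGUF Model"},
        },
        "2": {
            "inputs": {"clip_name": "qwen_3_8b_fp8mixed.safetensors", "type": "flux2"},
            "class_type": "CLIPLoader",
            "_meta": {"title": "Load Text Encoder"},
        },
        "3": {
            "inputs": {"vae_name": "flux2-vae.safetensors"},
            "class_type": "VAELoader",
            "_meta": {"title": "Load VAE"},
        },
        "4": {
            "inputs": {"text": prompt},
            "class_type": "CLIPTextEncode",  # assumption: not shown in the truncated diff
            "_meta": {"title": "Positive Prompt"},
        },
    }
    return {"input": {"workflow": workflow}}


if __name__ == "__main__":
    payload = build_flux2_payload("anime cat with massive fluffy fennec ears")
    print(sorted(payload["input"]["workflow"]))  # node ids in the sketch
```

The resulting dict is what you would POST as the request body to the worker endpoint.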