Add WaveSpeed text-to-image node (Flux Dev / Schnell)#13574
onlyforwork2026 wants to merge 2 commits into Comfy-Org:master from
Conversation
Adds a new `WavespeedTextToImageNode` that generates images via WaveSpeed's fast Flux inference API (flux-dev, flux-dev-fp8, flux-schnell, flux-schnell-fp8). Also adds the corresponding `WavespeedTextToImageRequest` Pydantic model to the wavespeed API module. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
📝 Walkthrough

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)

✅ Passed checks (4 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings. Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🧹 Nitpick comments (1)
comfy_api_nodes/nodes_wavespeed.py (1)
176-184: Optional: the `_MODEL_ENDPOINT` mapping is just `model.split("/", 1)[1]`.

Every key/value pair is `"wavespeed-ai/<x>" → "<x>"`. You could drop the dict (and one source of drift if a new model is added) by deriving the endpoint inline. Same goes for `_SCHNELL_MODELS` if you prefer a string check (`"schnell" in model`), though the explicit set is arguably clearer; feel free to keep it.

♻️ Possible simplification

```diff
-_MODEL_ENDPOINT = {
-    "wavespeed-ai/flux-dev": "flux-dev",
-    "wavespeed-ai/flux-dev-fp8": "flux-dev-fp8",
-    "wavespeed-ai/flux-schnell": "flux-schnell",
-    "wavespeed-ai/flux-schnell-fp8": "flux-schnell-fp8",
-}
-
 _SCHNELL_MODELS = {"wavespeed-ai/flux-schnell", "wavespeed-ai/flux-schnell-fp8"}

-    endpoint_name = _MODEL_ENDPOINT[model]
     is_schnell = model in _SCHNELL_MODELS
     initial_res = await sync_op(
         cls,
         ApiEndpoint(
-            path=f"/proxy/wavespeed/api/v3/wavespeed-ai/{endpoint_name}",
+            path=f"/proxy/wavespeed/api/v3/{model}",
             method="POST",
         ),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@comfy_api_nodes/nodes_wavespeed.py` around lines 176 - 184, The _MODEL_ENDPOINT dict is redundant because every key maps to the suffix after "wavespeed-ai/"; replace uses of _MODEL_ENDPOINT with a small helper or inline derivation like endpoint = model.split("/", 1)[1] (or model.partition("/")[2]) wherever the mapping is read, and remove the _MODEL_ENDPOINT constant; likewise, instead of the _SCHNELL_MODELS set you can either keep it or replace checks like model in _SCHNELL_MODELS with a substring test (e.g., "schnell" in model) in the functions that branch on schnell models (reference symbols: _MODEL_ENDPOINT, _SCHNELL_MODELS, and the request/selection code in nodes_wavespeed.py that currently looks up endpoints and checks for schnell models).
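As a quick illustration of the reviewer's suggestion, a standalone sketch (the helper names `endpoint_for` and `is_schnell` are hypothetical, not part of the node's actual code):

```python
def endpoint_for(model: str) -> str:
    # Every supported model id has the form "wavespeed-ai/<endpoint>",
    # so the endpoint suffix can be derived instead of kept in a dict.
    return model.split("/", 1)[1]


def is_schnell(model: str) -> bool:
    # Substring check that could replace the explicit _SCHNELL_MODELS set.
    return "schnell" in model
```

Deriving the endpoint this way removes one place that must be updated when a new model id is added, at the cost of assuming the `"wavespeed-ai/<x>"` naming convention holds.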
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@comfy_api_nodes/nodes_wavespeed.py`:
- Around line 234-241: The seed input's tooltip allows -1 but the widget sets
min=0 for IO.Int.Input("seed", ...), so make the widget accept -1 and handle it:
change min to -1 on IO.Int.Input("seed", default=0, min=-1, ...), and in the
code path that uses the seed value (where the node packages or sends the "seed"
parameter) detect seed == -1 and replace it with a generated random 64-bit value
before sending to the Wavespeed API; alternatively, if you prefer the simpler
fix, update the tooltip text on IO.Int.Input("seed", ...) to remove the "Use -1
for random" wording so it matches min=0.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3732544d-be26-4436-b943-23e29b610c3c
⛔ Files ignored due to path filters (1)
`comfy_api_nodes/apis/wavespeed.py` is excluded by `!comfy_api_nodes/apis/**`
📒 Files selected for processing (1)
comfy_api_nodes/nodes_wavespeed.py
```python
IO.Int.Input(
    "seed",
    default=0,
    min=0,
    max=0xFFFFFFFFFFFFFFFF,
    control_after_generate=True,
    tooltip="Seed for reproducibility. Use -1 for random.",
),
```
Seed tooltip contradicts the min value.
The tooltip says “Use -1 for random,” but min=0, so -1 is not actually accepted by the input widget. Either lower min to -1 (and translate -1 to a randomized seed before sending), or fix the tooltip to match the real range.
✏️ Suggested tooltip fix (simplest)
```diff
 IO.Int.Input(
     "seed",
     default=0,
     min=0,
     max=0xFFFFFFFFFFFFFFFF,
     control_after_generate=True,
-    tooltip="Seed for reproducibility. Use -1 for random.",
+    tooltip="Seed for reproducibility.",
 ),
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
IO.Int.Input(
    "seed",
    default=0,
    min=0,
    max=0xFFFFFFFFFFFFFFFF,
    control_after_generate=True,
    tooltip="Seed for reproducibility.",
),
```
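A minimal sketch of the "translate -1 to a random seed" option discussed above (`resolve_seed` is a hypothetical helper name, not part of the node):

```python
import secrets


def resolve_seed(seed: int) -> int:
    # -1 means "randomize": draw a fresh 64-bit seed before sending the
    # request; any other value passes through unchanged.
    if seed == -1:
        return secrets.randbits(64)
    return seed
```

This keeps the widget range and the API payload consistent: the API only ever sees a concrete seed in `[0, 2**64 - 1]`.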
Adds a ComfyUI blueprint that wraps the new WavespeedTextToImageNode, exposing prompt, model, width, and height as user-facing inputs. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `blueprints/Text to Image (WaveSpeed Flux).json`:
- Around line 80-83: The proxyWidgets array contains a stray entry ["1","seed"]
that doesn't match the outer node's declared inputs (prompt, model, width,
height) and is ambiguous because "1" refers to the outer node and also an inner
WavespeedTextToImageNode id; remove the ["1","seed"] entry from proxyWidgets to
keep the proxy mapping consistent with widgets_values and top-level inputs, or
if exposing seed was intentional, add a corresponding top-level input widget
(and a widgets_values entry) and update proxyWidgets to reference the top-level
input id instead of "1".
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: a8ac5e10-3b52-481a-9f81-6db89c30b713
📒 Files selected for processing (1)
blueprints/Text to Image (WaveSpeed Flux).json
```json
[
  "1",
  "seed"
]
```
Stray seed entry in proxyWidgets doesn't match the top-level node's inputs.
The outer node only declares four inputs/widgets (prompt, model, width, height) and widgets_values only has four matching entries, but proxyWidgets adds a fifth entry ["1", "seed"]. The PR description also lists only prompt/model/width/height as user-facing on the blueprint. The other four entries use "-1" (the subgraph's input node) while this one uses "1", which is ambiguous — at the outer scope id 1 is this very node (no seed widget on it), and inside the subgraph the inner WavespeedTextToImageNode also happens to have id 1. Looks like a leftover from the editor; consider dropping it (or, if exposing seed at the top level was intentional, also add the corresponding input + widgets_values entry to keep things consistent).
🧹 Suggested cleanup
```diff
   "proxyWidgets": [
     [
       "-1",
       "prompt"
     ],
     [
       "-1",
       "model"
     ],
     [
       "-1",
       "width"
     ],
     [
       "-1",
       "height"
-    ],
-    [
-      "1",
-      "seed"
     ]
   ],
```
Thank you for your contribution. ComfyUI already supports Flux models via Black Forest Labs' APIs. For any commercial inquiry, please contact us here: https://www.comfy.org/contact
What changed
Added a new `WavespeedTextToImageNode` that lets users generate images via WaveSpeed's fast Flux inference API.

Files changed:

- `comfy_api_nodes/apis/wavespeed.py`: new `WavespeedTextToImageRequest` Pydantic model
- `comfy_api_nodes/nodes_wavespeed.py`: new `WavespeedTextToImageNode` class, registered in `WavespeedExtension`

Why
The existing WaveSpeed nodes only covered upscaling (video and image). WaveSpeed's primary offering is fast Flux inference, so text-to-image was a natural addition.
Details
Supported models:

- `wavespeed-ai/flux-dev`
- `wavespeed-ai/flux-dev-fp8`
- `wavespeed-ai/flux-schnell`
- `wavespeed-ai/flux-schnell-fp8`

Node inputs: model, prompt, width, height, steps, guidance_scale (advanced), seed, safety_checker (advanced)

Schnell models automatically set `guidance_scale=1.0` since they don't use CFG.

Pricing: Dev ~$0.003/image, Schnell ~$0.001/image
🤖 Generated with Claude Code
API Node PR Checklist
Scope
Pricing & Billing
If Need pricing update:
QA
Comms