feat(api-nodes): add dedicated OpenAI GPT-Image-2 node #13506
marawan206 wants to merge 1 commit into master from
Conversation
- Add `OpenAIGPTImage2` node (`node_id: OpenAIGPTImage2`) with settings specific to gpt-image-2: quality auto/low/medium/high, background auto/opaque (transparent not supported), all 8 popular size presets, and custom width/height inputs (step=16, max=3840) that override the size preset when both are non-zero
- Add `_resolve_gpt_image_2_size` helper that enforces API constraints: max edge ≤ 3840px, multiples of 16, ratio ≤ 3:1, total pixels 655,360–8,294,400
- Add `calculate_tokens_price_image_2` using correct gpt-image-2 rates ($8/1M input, $30/1M output); price badge shows range per quality tier with approximate flag for auto quality
- Rename `OpenAIGPTImage1` display name to "OpenAI GPT Image 1 & 1.5", remove gpt-image-2 from its model dropdown, and update its price badge to be model-aware with correct per-model ranges
- Add unit tests covering price formulas, size resolution logic, and schema correctness for both nodes

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
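The token rates above imply a straightforward cost formula. A minimal sketch of how such a pricing helper might compute cost (illustrative only — the function name, signature, and token counts are assumptions, not the PR's actual `calculate_tokens_price_image_2`):

```python
# Illustrative gpt-image-2 pricing: $8 per 1M input tokens, $30 per 1M
# output tokens (rates taken from the PR description above).
INPUT_RATE_PER_TOKEN = 8 / 1_000_000
OUTPUT_RATE_PER_TOKEN = 30 / 1_000_000


def estimate_gpt_image_2_price(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one API call."""
    return input_tokens * INPUT_RATE_PER_TOKEN + output_tokens * OUTPUT_RATE_PER_TOKEN


# e.g. a hypothetical call billing 500 input and 4,160 output tokens:
price = estimate_gpt_image_2_price(500, 4_160)  # 0.004 + 0.1248 = 0.1288
```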
📝 Walkthrough

This pull request introduces a new OpenAI GPT-Image-2 node with dedicated pricing calculations and size validation. The existing OpenAI GPT-Image-1 node was updated to remove GPT-Image-2 from its model options and to adjust its pricing display logic. The new OpenAI GPT-Image-2 node supports both image generation and editing workflows, with input validation for custom dimensions (multiples of 16, max 3840 pixels per edge, 3:1 aspect ratio limit) and optional mask uploads for edit operations. Comprehensive test coverage was added to validate pricing calculations, size resolution logic, schema definitions, and execute-time input constraints.

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
comfy_api_nodes/nodes_openai.py (2)
492-492: ⚠️ Potential issue | 🟡 Minor

`OpenAIGPTImage1.execute` fallback default for `model` is now out of sync with the schema.

The schema's model combo default was just flipped to `"gpt-image-1.5"` (Line 436), but the `execute(...)` signature still carries `model: str = "gpt-image-1"` as its default. If anything ever calls `execute()` without explicitly passing `model` (tests, direct Python invocation, or any future harness that bypasses widget defaults), it silently uses `gpt-image-1` with `calculate_tokens_price_image_1`, contradicting the UI default and its price badge. Trivial fix, and it keeps the two sources of truth in lockstep:
🔧 Proposed alignment
- size: str = "1024x1024",
- model: str = "gpt-image-1",
+ size: str = "1024x1024",
+ model: str = "gpt-image-1.5",

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@comfy_api_nodes/nodes_openai.py` at line 492, The execute method's default model value in OpenAIGPTImage1 is out of sync with the UI/schema; update the function signature for OpenAIGPTImage1.execute to use model: str = "gpt-image-1.5" (matching the schema change at the combo default) and verify any downstream logic that branches on model (e.g., calls to calculate_tokens_price_image_1 or price calculation helpers) uses the correct pricing function or mapping for "gpt-image-1.5" so runtime defaults and price calculations remain consistent.
435-478: ⚠️ Potential issue | 🟡 Minor

Changing the default model to `gpt-image-1.5` is a user-visible behavior change for existing workflows.

Any previously-saved workflow that relied on the combo's default (i.e., never explicitly set `model`) will now run against `gpt-image-1.5` with the accompanying price range, rather than the `gpt-image-1` they were getting before. That's almost certainly the intent, but it's worth calling out in the PR description / release notes so users aren't surprised by different output characteristics and billing on the first run after upgrading. The price-badge branching at Lines 454–465 is correctly ordered (`gpt-image-1.5` matched before `gpt-image-1`), so the badge itself is fine.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@comfy_api_nodes/nodes_openai.py` around lines 435 - 478, You changed the default for the image model widget (the model combo default in nodes_openai.py — the widget with default="gpt-image-1.5" and the price_badge logic that checks $contains($m, "gpt-image-1.5")), which is a user-facing behavior and billing change for saved workflows that never set model; update the PR description and release notes to explicitly call out that the default model changed from gpt-image-1 to gpt-image-1.5 (and mention the resulting potential differences in outputs and billing due to the price_badge branching), and if you want to avoid surprising users consider adding a migration note or keeping the old default and introducing the new model as an opt-in flag.
🧹 Nitpick comments (3)
comfy_api_nodes/nodes_openai.py (2)
580-589: Module-level constant exposure: intentional?

`_GPT_IMAGE_2_SIZES` is prefixed with an underscore (signaling "private") but is imported directly by the test module (`from comfy_api_nodes.nodes_openai import _GPT_IMAGE_2_SIZES`). That's a minor style inconsistency — either drop the underscore if it's part of the stable module surface (tests + anyone who wants the canonical list), or keep it private and have the test assert against the schema's `size_input.options` only (which the test already does at Line 222, making the explicit import somewhat redundant).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@comfy_api_nodes/nodes_openai.py` around lines 580 - 589, The module-level list _GPT_IMAGE_2_SIZES is exposed to tests but named with a leading underscore; either make it part of the public API by renaming it to GPT_IMAGE_2_SIZES (update any references) or keep it private and remove its direct import from tests, having the test assert against the node/schema property (size_input.options) instead; choose one approach and make the corresponding change consistently for the symbol _GPT_IMAGE_2_SIZES and the test that imports it.
592-605: Validation logic reads clean; just a couple of small polish notes.

The check ordering (max-edge → multiple-of-16 → ratio → pixel bounds) is sensible and the messages are specific enough for users to fix their inputs. Two tiny things that are easy to defer:

- The non-ASCII `≤` in user-facing error strings (Line 597) is fine in most terminals but can render oddly in some Windows consoles / log aggregators; `<=` would be safer. Same applies to the tooltip strings on Lines 663 and 673.
- Negative custom dimensions currently short-circuit to "use preset" via `<= 0` (Line 593). That's reasonable because the slider's `min=0` makes it unreachable from the UI, but if you'd like to be defensive against programmatic callers, an explicit `raise ValueError` for negatives wouldn't hurt.

Neither is blocking.
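For reference, the check ordering described above can be sketched as follows (an illustrative reimplementation, not the PR's exact `_resolve_gpt_image_2_size`; the error messages and the preset passthrough behavior are assumptions based on the review text):

```python
def resolve_size(size: str, custom_width: int = 0, custom_height: int = 0) -> str:
    """Resolve a gpt-image-2 size string, honoring the documented API limits."""
    if custom_width <= 0 or custom_height <= 0:
        return size  # fall back to the preset when custom dims are unset
    w, h = custom_width, custom_height
    # Check order mirrors the review: max-edge -> multiple-of-16 -> ratio -> pixels.
    if max(w, h) > 3840:
        raise ValueError("Each edge must be <= 3840 pixels")
    if w % 16 or h % 16:
        raise ValueError("Width and height must be multiples of 16")
    if max(w, h) / min(w, h) > 3:
        raise ValueError("Aspect ratio must be <= 3:1")
    if not (655_360 <= w * h <= 8_294_400):
        raise ValueError("Total pixels must be between 655,360 and 8,294,400")
    return f"{w}x{h}"
```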
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@comfy_api_nodes/nodes_openai.py` around lines 592 - 605, In _resolve_gpt_image_2_size, replace the non-ASCII "≤" with the ASCII "<=" in all user-facing ValueError messages (and in the related tooltip strings elsewhere in this module) to avoid rendering issues, and change the initial guard that currently returns size when custom_width or custom_height are <= 0 to instead raise a ValueError for negative dimensions (allow zero to keep the "use preset" behavior) so programmatic callers get explicit errors for negative inputs; update the error text to be clear and consistent with the other validation messages.

tests-unit/comfy_api_test/openai_nodes_test.py (1)
229-246: `pytest.raises(Exception)` is too broad — it will happily swallow `ImportError` or `TypeError` from a future refactor.

Two of the three async tests use a bare `Exception`. Narrowing to `ValueError` (which is what `_resolve_gpt_image_2_size` and the mask check actually raise) keeps these tests honest if someone changes the surrounding machinery.

♻️ Suggested narrowing
-@pytest.mark.asyncio
-async def test_execute_raises_invalid_custom_size():
-    with pytest.raises(ValueError):
-        await OpenAIGPTImage2.execute(prompt="test", custom_width=4096, custom_height=1024)
+@pytest.mark.asyncio
+async def test_execute_raises_invalid_custom_size():
+    with pytest.raises(ValueError, match="3840"):
+        await OpenAIGPTImage2.execute(prompt="test", custom_width=4096, custom_height=1024)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-unit/comfy_api_test/openai_nodes_test.py` around lines 229 - 246, The test uses a broad pytest.raises(Exception) which can hide unrelated errors; update the failing-prompt test to expect ValueError instead: change the pytest.raises(Exception) in test_execute_raises_on_empty_prompt to pytest.raises(ValueError) so OpenAIGPTImage2.execute(prompt=" ") is asserted to raise the specific ValueError raised by the input validation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c4b88aa5-d4b9-400e-9866-2e01847d3ee3
📒 Files selected for processing (2)
- comfy_api_nodes/nodes_openai.py
- tests-unit/comfy_api_test/openai_nodes_test.py
price_badge=IO.PriceBadge(
    depends_on=IO.PriceBadgeDepends(widgets=["quality", "num_images"]),
    expr="""
    (
        $ranges := {
            "low": [0.005, 0.010],
            "medium": [0.041, 0.060],
            "high": [0.165, 0.250]
        };
        $q := widgets.quality;
        $n := widgets.num_images;
        $n := ($n != null and $n != 0) ? $n : 1;
        $range := $lookup($ranges, $q);
        $lo := $range ? $range[0] : 0.005;
        $hi := $range ? $range[1] : 0.250;
        ($n = 1)
            ? {"type":"range_usd","min_usd": $lo, "max_usd": $hi, "format": {"approximate": ($range ? false : true)}}
            : {
                "type":"range_usd",
                "min_usd": $lo,
                "max_usd": $hi,
                "format": {"approximate": ($range ? false : true), "suffix": " x " & $string($n) & "/Run"}
            }
    )
    """,
),
Price badge fallback range is suspiciously wide — worth a quick sanity check.
When quality == "auto", $lookup($ranges, $q) is falsy, so the badge falls back to [0.005, 0.250] with approximate: true (Lines 726–727). That spans a 50× ratio, which is technically honest ("we don't know until the model decides") but potentially alarming in the UI. If product is OK with that, great — this is just a heads-up that it'll render as "~$0.005–$0.250" by default on a fresh node drop, since "auto" is the default quality (Line 637).
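One way to implement the suggested tightening, sketched in Python rather than the badge's expression language (mapping "auto" to the medium bucket is one possible product choice discussed above, not something the PR currently does):

```python
# Quality ranges copied from the badge expression under review.
QUALITY_RANGES = {
    "low": (0.005, 0.010),
    "medium": (0.041, 0.060),
    "high": (0.165, 0.250),
}


def badge_range(quality: str) -> tuple[float, float, bool]:
    """Return (lo, hi, approximate) for the price badge."""
    if quality in QUALITY_RANGES:
        lo, hi = QUALITY_RANGES[quality]
        return lo, hi, False
    # Map "auto" (or any unknown quality) to the medium bucket instead of
    # the full 50x span, but keep the approximate flag so the UI still
    # signals uncertainty.
    lo, hi = QUALITY_RANGES["medium"]
    return lo, hi, True
```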
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@comfy_api_nodes/nodes_openai.py` around lines 713 - 738, The price-badge JQ
expression in IO.PriceBadge uses a very wide fallback range [0.005, 0.250] when
$lookup($ranges, widgets.quality) is falsy (e.g., quality == "auto"); tighten
this by either mapping "auto" explicitly to a specific bucket (e.g., "medium")
or replacing the fallback bounds with a narrower, product-approved default (for
example set $lo/$hi to something closer to the existing medium/high ranges) and
keep approximate: true; update the expr in the IO.PriceBadge (the
$ranges/$lookup/$range/$lo/$hi logic) so new default behavior applies when
widgets.quality is not found.
@classmethod
async def execute(
    cls,
    prompt: str,
    seed: int = 0,
    quality: str = "auto",
    background: str = "auto",
    image: Input.Image | None = None,
    mask: Input.Image | None = None,
    num_images: int = 1,
    size: str = "auto",
    custom_width: int = 0,
    custom_height: int = 0,
    model: str = "gpt-image-2",
) -> IO.NodeOutput:
    validate_string(prompt, strip_whitespace=False)

    if mask is not None and image is None:
        raise ValueError("Cannot use a mask without an input image")

    resolved_size = _resolve_gpt_image_2_size(size, custom_width, custom_height)

    if image is not None:
        files = []
        batch_size = image.shape[0]
        for i in range(batch_size):
            single_image = image[i : i + 1]
            scaled_image = downscale_image_tensor(single_image, total_pixels=2048 * 2048).squeeze()

            image_np = (scaled_image.numpy() * 255).astype(np.uint8)
            img = Image.fromarray(image_np)
            img_byte_arr = BytesIO()
            img.save(img_byte_arr, format="PNG")
            img_byte_arr.seek(0)

            if batch_size == 1:
                files.append(("image", (f"image_{i}.png", img_byte_arr, "image/png")))
            else:
                files.append(("image[]", (f"image_{i}.png", img_byte_arr, "image/png")))

        if mask is not None:
            if image.shape[0] != 1:
                raise Exception("Cannot use a mask with multiple image")
            if mask.shape[1:] != image.shape[1:-1]:
                raise Exception("Mask and Image must be the same size")
            _, height, width = mask.shape
            rgba_mask = torch.zeros(height, width, 4, device="cpu")
            rgba_mask[:, :, 3] = 1 - mask.squeeze().cpu()

            scaled_mask = downscale_image_tensor(rgba_mask.unsqueeze(0), total_pixels=2048 * 2048).squeeze()

            mask_np = (scaled_mask.numpy() * 255).astype(np.uint8)
            mask_img = Image.fromarray(mask_np)
            mask_img_byte_arr = BytesIO()
            mask_img.save(mask_img_byte_arr, format="PNG")
            mask_img_byte_arr.seek(0)
            files.append(("mask", ("mask.png", mask_img_byte_arr, "image/png")))

        response = await sync_op(
            cls,
            ApiEndpoint(path="/proxy/openai/images/edits", method="POST"),
            response_model=OpenAIImageGenerationResponse,
            data=OpenAIImageEditRequest(
                model=model,
                prompt=prompt,
                quality=quality,
                background=background,
                n=num_images,
                size=resolved_size,
                moderation="low",
            ),
            content_type="multipart/form-data",
            files=files,
            price_extractor=calculate_tokens_price_image_2,
        )
    else:
        response = await sync_op(
            cls,
            ApiEndpoint(path="/proxy/openai/images/generations", method="POST"),
            response_model=OpenAIImageGenerationResponse,
            data=OpenAIImageGenerationRequest(
                model=model,
                prompt=prompt,
                quality=quality,
                background=background,
                n=num_images,
                size=resolved_size,
                moderation="low",
            ),
            price_extractor=calculate_tokens_price_image_2,
        )
    return IO.NodeOutput(await validate_and_cast_response(response))
`seed` is accepted by the node but silently dropped on the wire.

`OpenAIGPTImage2.execute` declares `seed: int = 0` and the schema exposes a seed input with `control_after_generate=True`, but neither `OpenAIImageEditRequest(...)` at Lines 803–811 nor `OpenAIImageGenerationRequest(...)` at Lines 821–829 forwards `seed=seed`. Compare with `OpenAIGPTImage1.execute`, which does pass `seed=seed` in both branches (Lines 552 and 571). Users wiring a deterministic seed control into this node will quietly get non-reproducible results without any error.
If the backend genuinely doesn't accept seed for gpt-image-2 yet, please drop the input from the schema rather than leaving a ghost widget. Otherwise, plumb it through like the Image 1 node does:
🔧 Proposed forwarding
 data=OpenAIImageEditRequest(
     model=model,
     prompt=prompt,
     quality=quality,
     background=background,
     n=num_images,
+    seed=seed,
     size=resolved_size,
     moderation="low",
 ),

 data=OpenAIImageGenerationRequest(
     model=model,
     prompt=prompt,
     quality=quality,
     background=background,
     n=num_images,
+    seed=seed,
     size=resolved_size,
     moderation="low",
 ),

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@comfy_api_nodes/nodes_openai.py` around lines 741 - 832,
OpenAIGPTImage2.execute currently accepts seed but never forwards it in the
request payload; update the calls that construct OpenAIImageEditRequest and
OpenAIImageGenerationRequest inside OpenAIGPTImage2.execute to include seed=seed
(matching how OpenAIGPTImage1.execute forwards seed), or if the backend truly
doesn't support seeding for gpt-image-2, remove the seed parameter from the node
schema and function signature instead; locate the two request constructions
named OpenAIImageEditRequest(...) and OpenAIImageGenerationRequest(...) in
OpenAIGPTImage2.execute and either add seed=seed to both or delete/disable the
seed input upstream.
@pytest.mark.asyncio
async def test_execute_raises_on_empty_prompt():
    with pytest.raises(Exception):
        await OpenAIGPTImage2.execute(prompt=" ")
Whitespace-only prompt test likely passes for the wrong reason.
OpenAIGPTImage2.execute calls validate_string(prompt, strip_whitespace=False), so a whitespace-only " " will not raise from validation. The test still "passes" because execute continues into _resolve_gpt_image_2_size (fine, returns "auto") and then into sync_op(...), which blows up for unrelated reasons (missing hidden auth context / no network) — not because of the empty-prompt check you're trying to assert. That's a fragile guarantee: any refactor that wires the test harness up to a real client could silently hide a regression.
Either tighten the intent by asserting a specific error type/message, or exercise a truly empty prompt with strip-aware validation.
🧪 Suggested tightening
-@pytest.mark.asyncio
-async def test_execute_raises_on_empty_prompt():
- with pytest.raises(Exception):
- await OpenAIGPTImage2.execute(prompt=" ")
+@pytest.mark.asyncio
+async def test_execute_raises_on_empty_prompt():
+ # Today validate_string is called with strip_whitespace=False, so "" is the
+ # only value guaranteed to fail at the validation step.
+ with pytest.raises(Exception):
+ await OpenAIGPTImage2.execute(prompt="")🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests-unit/comfy_api_test/openai_nodes_test.py` around lines 229 - 232, The
test currently passes for the wrong reason because OpenAIGPTImage2.execute uses
validate_string(prompt, strip_whitespace=False) so a whitespace-only prompt
reaches _resolve_gpt_image_2_size and then fails later in sync_op; fix the test
to exercise validation directly by using a truly empty prompt (prompt="") or
call validate_string with strip_whitespace=True, and assert the specific
exception/message you expect (e.g., ValueError or custom error) rather than a
broad pytest.raises(Exception); update the test to reference
OpenAIGPTImage2.execute and assert the precise error text to avoid fragile
reliance on downstream failures from sync_op.
Summary
- `OpenAIGPTImage2` — dedicated node for gpt-image-2 with its specific capabilities and constraints, separate from the existing gpt-image-1/1.5 node
- Size presets (`auto`, `1024x1024`, `1536x1024`, `1024x1536`, `2048x2048`, `2048x1152`, `3840x2160`, `2160x3840`) plus always-visible `custom_width`/`custom_height` inputs (slider step=16, max=3840) that override the size preset when both are non-zero
- Background: `auto` and `opaque` only
- Quality `auto` added — gpt-image-2 supports `auto`/`low`/`medium`/`high`; price badge shows ~$0.005–$0.250 with approximate flag for auto
- `num_images` replaces `n` for clarity; locked `model` widget shows `gpt-image-2` so users know which model is running
- `OpenAIGPTImage1` updated — renamed display name to "OpenAI GPT Image 1 & 1.5", removed gpt-image-2 from its model dropdown, price badge now model-aware with correct per-model ranges
- `calculate_tokens_price_image_2` — correct gpt-image-2 pricing ($8/1M input, $30/1M output vs gpt-image-1.5's $32/1M output)

Size constraint enforcement

- slider `step=16`, `max=3840`
- `_resolve_gpt_image_2_size` at generation time

API Node PR Checklist
Scope
Pricing & Billing
If Need pricing update:
QA
Comms