
feat: Context Windows sampling with LTX2 models and IC-LoRa guides (CORE-3)#13325

Open
drozbay wants to merge 28 commits into Comfy-Org:master from drozbay:20260322a_ltx_contextwin

Conversation


@drozbay drozbay commented Apr 7, 2026

Enables LTX-2 (LTXAV) multimodal video+audio models to work with context windows, and reworks IC-LoRA guide handling so guides at arbitrary positions get spliced correctly into each window they overlap.

Context windowing now treats every input as potentially multi-modal via a new WindowingState that holds per-modality latents and also the concatenated guide latents if present. A new map_context_window_to_modalities hook on the LTXAV model derives per-modality windows from the primary (video) window. LTXAV's guide-specific slicing (denoise_mask, keyframe_idxs, guide_attention_entries, audio mask) lives on the model via the resize_cond_for_context_window pattern.
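As a rough illustration of the shape described above, a per-modality container could look like the following (a minimal sketch; the class, field, and method names here are assumptions for illustration, not the PR's actual API):

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class WindowingStateSketch:
    """Hypothetical stand-in for the WindowingState described above.

    Holds one latent per modality (video first, then e.g. audio) plus the
    concatenated guide latents, if any. Names are illustrative only.
    """
    modalities: list[Any] = field(default_factory=list)  # e.g. [video_latent, audio_latent]
    guide_latents: Optional[Any] = None  # concatenated IC-LoRA guide latents, if present

    def primary(self) -> Any:
        # modality 0 is the primary (video) latent that drives window creation
        return self.modalities[0]
```

The point of such a container is that windowing code can iterate modalities uniformly while still treating the primary (video) latent as the source of truth for window geometry.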

Testing and workflows

  1. First-frame / last-frame LTX-2 generation - an in-place latent start frame plus a single guide latent at the last-frame position.
    Exercises guides at both ends of the clip and the partial-overlap path where a guide only participates in a subset of windows.
LTX2_Stage2_00008_.mp4

droz_LTX-2_ContextWindows_FFLF_Test.json

(Asset) Start and end frames:
FirstFrame_Test
LastFrame_Test

  2. Fully guided LTX-2 video - a canny control video spanning the full length of the input. Exercises the full-coverage guide path where every window includes corresponding guide frames. (Note that this run uses LTX-2 19B because it needs a basic Canny control IC-LoRA.)
LTX2_Stage2_00012_.mp4

droz_LTX-2_ContextWindows_CannyTest.json

(Asset) Input Guide Video:

CannyTest.mp4
  3. WanAnimate + context windows - a single-modality, no-guide regression check confirming that non-multimodal video models still work through the unified windowing pipeline (same workflow as Add slice_cond and per-model context window cond resizing #12645)
WanAnimate_00004_.mp4

droz_WanAnimate_ContextWindows_Native_v1.1.json

drozbay added 16 commits April 6, 2026 10:10
…odel. Older LTXV model's guides + context_windows will need to be re-implemented but outside the scope of LTX2 changes
…e variable names, refactor and condense new context window methods to separate execution paths cleanly
@alexisrolland changed the title from "feat: Context Windows sampling with LTX2 models and IC-LoRa guides" to "feat: Context Windows sampling with LTX2 models and IC-LoRa guides (CORE-3)" on Apr 8, 2026
drozbay and others added 2 commits April 11, 2026 11:31
@drozbay drozbay marked this pull request as ready for review April 12, 2026 18:30

coderabbitai Bot commented Apr 12, 2026

📝 Walkthrough

Adds multimodal-aware context-windowing: introduces WindowingState and extends context-window classes to produce per-modality windows, slice and reassemble latents, inject and strip guide frames, and adjust model_conds["latent_shapes"]. Refactors context handler and evaluation to operate over per-modality latents with pack/unpack around model calls. Moves FreeNoise handling to a per-modality helper, adds LTXAV mapping/resize helpers, tightens keyframe gating, records latent_start in guide entries, and exposes latent_retain_index_list in nodes.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 20.00%, which is insufficient; the required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (4 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title 'feat: Context Windows sampling with LTX2 models and IC-LoRa guides (CORE-3)' directly and specifically summarizes the main changes: enabling context windows for LTX2 multimodal models with IC-LoRA guide support. |
| Description check | ✅ Passed | The description provides a clear technical explanation of the changes, including the WindowingState mechanism, per-modality windows, guide handling, and three comprehensive test cases with workflows and assets demonstrating the implementation. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |




coderabbitai Bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@comfy/context_windows.py`:
- Around line 667-670: The clamp is still running for packed multimodal latents;
change the condition to skip truncation when the handler indicates
packed/multimodal data. Update the if-statement that uses handler.dim,
noise_shape and handler.context_length to require that the handler is NOT a
packed multimodal case (e.g. add "and not getattr(handler, 'packed_multimodal',
False)" or "and not getattr(handler, 'is_packed', False)") before mutating
noise_shape[handler.dim]; this ensures you only min() the value for real
frame-count axes and leave flattened multimodal token axes untouched.

In `@comfy/model_base.py`:
- Around line 1086-1104: The current map_context_window_to_modalities collapses
wrapped/strided video windows by using only len(primary_indices) and
max(primary_indices); instead, map each element of primary_indices individually
into the secondary modality using its relative position within the video
timeline so wrapped/strided geometry is preserved (e.g., for each idx in
primary_indices compute mapped_idx = clamp(round(idx * mod_total / video_total),
0, mod_total-1) and collect those mapped indices in the same order/stride as
primary_indices), then if you need a contiguous window for downstream code
optionally compute a minimal bounding window but prefer returning the ordered
mapped list; change map_context_window_to_modalities to iterate primary_indices
and compute per-index mappings for each latent_shapes[i] (refer to function name
map_context_window_to_modalities and caller create_windows_uniform_looped)
rather than using video_window_len and max(primary_indices).
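The per-index mapping this comment asks for can be sketched as follows (a hypothetical helper, not code from this PR; the name `map_indices_to_modality` and the list-based interface are assumptions):

```python
def map_indices_to_modality(primary_indices: list[int], video_total: int, mod_total: int) -> list[int]:
    """Map each primary (video) frame index into a secondary modality's
    timeline by relative position, preserving wrapped/strided ordering
    instead of collapsing the window into a contiguous range."""
    if video_total == 0:  # defensive: empty primary timeline
        return []
    mapped = []
    for idx in primary_indices:
        m = round(idx * mod_total / video_total)
        mapped.append(max(0, min(m, mod_total - 1)))  # clamp to valid range
    return mapped
```

A wrapped video window such as `[6, 7, 0, 1]` then yields modality indices in the same wrapped order, rather than the contiguous tail the original window-level calculation produced.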

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: e8047aba-e470-402b-800d-3b0413df0bd3

📥 Commits

Reviewing files that changed from the base of the PR and between 31283d2 and c4da226.

📒 Files selected for processing (4)
  • comfy/context_windows.py
  • comfy/ldm/lightricks/model.py
  • comfy/model_base.py
  • comfy_extras/nodes_lt.py

drozbay added 2 commits April 12, 2026 14:52
…imary windows

The previous window-level calculation collapsed wrapped or strided primary windows into a contiguous audio tail, so audio attended to a different temporal region than the video. Replace with per-frame mapping that computes each primary index's audio span independently and concatenates in order.
coderabbitai Bot left a comment

🧹 Nitpick comments (1)
comfy/model_base.py (1)

1159-1159: Consider moving import to module level.

The from comfy.ldm.lightricks.symmetric_patchifier import latent_to_pixel_coords import is inside the method. If this isn't for circular import avoidance, consider moving it to the module-level imports for clarity and to catch import errors at load time.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@comfy/model_base.py` at line 1159, The local import of latent_to_pixel_coords
(from comfy.ldm.lightricks.symmetric_patchifier) should be moved to the
module-level imports: remove the inline "from
comfy.ldm.lightricks.symmetric_patchifier import latent_to_pixel_coords" inside
the method and add that import at the top of comfy/model_base.py so import
errors surface at load time and code is clearer; if this local import was
required to avoid a circular import, keep it but add a short comment explaining
why and consider replacing it with a lazy import via importlib.import_module or
catching ImportError with a clear message.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3d76da94-4ce5-4fde-bc23-9ff45cfd5c85

📥 Commits

Reviewing files that changed from the base of the PR and between c4da226 and 01cebdb.

📒 Files selected for processing (1)
  • comfy/model_base.py

@drozbay force-pushed the 20260322a_ltx_contextwin branch from 01cebdb to b348c7f on April 12, 2026 21:05
coderabbitai Bot left a comment

🧹 Nitpick comments (1)
comfy/model_base.py (1)

1091-1102: Consider guarding against zero-length video dimension.

If latent_shapes[0][dim] is zero (edge case of empty video latent), the division on lines 1101-1102 would raise a ZeroDivisionError. While this is unlikely in normal operation, a defensive check would improve robustness.

🛡️ Suggested defensive check

```diff
         video_total = latent_shapes[0][dim]
+        if video_total == 0:
+            return result

         for i in range(1, len(latent_shapes)):
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@comfy/model_base.py` around lines 1091 - 1102, The code computes video_total
= latent_shapes[0][dim] and then divides by video_total when computing
a_start/a_end, so add a defensive guard immediately after computing video_total
to handle a zero-length primary video dimension: if video_total == 0, either
raise a clear ValueError (e.g. referencing video_total/latent_shapes[0]) or skip
the modality-mapping loop and return/emit empty mod_indices as appropriate for
the caller; update callers or document behavior if you choose to raise. Ensure
you reference video_total, latent_shapes, dim, primary_indices, and the loop
that computes mod_total/mod_indices so the fix is applied in the correct spot.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: bf8f8b7a-40c1-4b9b-a9e8-58d810d8ca83

📥 Commits

Reviewing files that changed from the base of the PR and between 01cebdb and b348c7f.

📒 Files selected for processing (1)
  • comfy/model_base.py

coderabbitai Bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@comfy/context_windows.py`:
- Around line 536-542: The fusion path currently uses the global
handler.context_overlap rather than the per-modality window info passed as
modality_window; update combine_context_window_results to accept and propagate a
per-modality overlap/profile (derived from modality_window) and change the
overlap-linear fuse code inside it to use that per-modality overlap/profile
instead of handler.context_overlap so ramp lengths and edge weights are computed
per modality; ensure callers (where combine_context_window_results is invoked
with modality_window in the shown loop) pass the modality-specific profile
through and that any downstream helpers that expect a single overlap value are
adjusted to accept the per-modality value.
- Around line 173-175: The code currently raises ValueError when a
guide_attention_entry lacks latent_start; change this to a backwards-compatible
path: in the handler that processes guide_attention_entry (the block referencing
latent_start on the entry object in comfy/context_windows.py), do not
raise—assign a safe default (e.g., latent_start = 0 or the same default used by
built-in LTX nodes) and optionally emit a non-fatal warning via the module
logger (e.g., logger.warning) naming the entry and that latent_start was
assumed, so old custom nodes continue to work without breaking changes.
- Around line 336-351: The FreeNoise step currently shuffles the entire primary
modality including the appended guide suffix; instead, detect and preserve the
guide suffix on modalities[0] (the same trailing guide frame count that
_build_window_state strips) by splitting modalities[0] into prefix and suffix,
call apply_freenoise only on the prefix, then reattach the suffix before
packing; update _apply_freenoise to perform this split/rejoin for modalities[0]
(using the same guide frame count source used by _build_window_state) while
keeping the existing per-modality ratio logic and calling apply_freenoise for
other modalities as before.
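The suffix-preserving FreeNoise step suggested above can be sketched like this (illustrative only; the helper name and the list-based latents are assumptions, and `apply_freenoise` stands in for whatever shuffling callable the handler uses):

```python
def freenoise_preserving_guide_suffix(primary, guide_count, apply_freenoise):
    """Apply FreeNoise shuffling only to the real frames of the primary
    modality, leaving the trailing guide-frame suffix untouched, then
    reattach the suffix (mirroring what _build_window_state strips)."""
    if guide_count <= 0:
        return apply_freenoise(primary)  # no guide suffix: shuffle everything
    prefix, suffix = primary[:-guide_count], primary[-guide_count:]
    return apply_freenoise(prefix) + suffix  # suffix passes through unchanged
```

The key property is that the guide frames at the tail keep both their content and their position, so later guide splicing still finds them where `_build_window_state` expects.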

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: d41c0e38-193a-4647-8f70-01c5e716ff8f

📥 Commits

Reviewing files that changed from the base of the PR and between b348c7f and f72583d.

📒 Files selected for processing (2)
  • comfy/context_windows.py
  • comfy/model_base.py


drozbay commented Apr 13, 2026

@coderabbitai resume


coderabbitai Bot commented Apr 13, 2026

✅ Actions performed

Reviews resumed.

coderabbitai Bot left a comment

🧹 Nitpick comments (1)
comfy/context_windows.py (1)

889-901: Consider adding a bounds check for context_overlap vs length.

If context_overlap exceeds length (e.g., due to rounding when scaling for secondary modalities), the weights_torch[:context_overlap] = ramp_up assignment would fail with a shape mismatch. While unlikely in practice given both values are scaled by the same ratio, a defensive clamp would prevent edge-case runtime errors.

🛡️ Optional defensive fix

```diff
 def create_weights_overlap_linear(length: int, full_length: int, idxs: list[int], context_overlap: int, **kwargs):
     # based on code in Kijai's WanVideoWrapper: https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/dbb2523b37e4ccdf45127e5ae33e31362f755c8e/nodes.py#L1302
     # only expected overlap is given different weights
     weights_torch = torch.ones((length))
+    # clamp overlap to prevent shape mismatch in edge cases
+    context_overlap = min(context_overlap, length)
     # blend left-side on all except first window
     if min(idxs) > 0:
         ramp_up = torch.linspace(1e-37, 1, context_overlap)
         weights_torch[:context_overlap] = ramp_up
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@comfy/context_windows.py` around lines 889 - 901, The function
create_weights_overlap_linear can fail when context_overlap > length due to
shape-mismatch on weights_torch[:context_overlap] or
weights_torch[-context_overlap:]; clamp and validate context_overlap at the top
(e.g., context_overlap = max(0, min(int(context_overlap), length))) and
early-return or skip ramp generation when context_overlap == 0, then use the
clamped value for ramp_up/ramp_down creation and slicing so weights_torch
assignments for ramp_up and ramp_down cannot exceed weights_torch length; keep
references to create_weights_overlap_linear, context_overlap, length,
weights_torch, ramp_up, and ramp_down when making the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3819965d-bbd2-49c0-b946-0080f4a64329

📥 Commits

Reviewing files that changed from the base of the PR and between 6a53695 and 6442392.

📒 Files selected for processing (1)
  • comfy/context_windows.py

…t retention doesn't occur on all models when unintended.
coderabbitai Bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
comfy/context_windows.py (1)

713-719: Optional: simplify conds extraction for the FreeNoise path.

guider.conds.get('positive', guider.conds.get('negative', [])) always evaluates the inner .get (negligible cost) and prefers positive even when it's an empty list, while only falling back to negative if the positive key is entirely absent — a mildly surprising ordering. Since _get_guide_entries only needs any cond list that carries guide_attention_entries, passing both is both simpler and slightly safer if a future guider only populates one side. Also, guider.conds itself is assumed to exist; a small getattr fallback avoids tripping up custom guiders that don't subclass CFGGuider.

♻️ Suggested tidy-up

```diff
-    conds = [guider.conds.get('positive', guider.conds.get('negative', []))]
-    noise = handler._apply_freenoise(noise, conds, extra_args["seed"])
+    guider_conds = getattr(guider, "conds", {}) or {}
+    conds = [guider_conds.get("positive", []), guider_conds.get("negative", [])]
+    noise = handler._apply_freenoise(noise, conds, extra_args["seed"])
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@comfy/context_windows.py` around lines 713 - 719, Replace the current
single-key conds extraction with a robust combination that uses getattr(guider,
'conds', {}) to avoid AttributeError on custom guiders and then collects both
'positive' and 'negative' entries (e.g., concat the two lists or pass both
lists) so _apply_freenoise receives any available guide entries; update the
reference in the FreeNoise path (around handler.freenoise and
handler._apply_freenoise) to use this combined conds variable before calling
executor.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 0537d89b-7959-48b7-9741-8a2387ee3a9b

📥 Commits

Reviewing files that changed from the base of the PR and between 6442392 and 4e434bc.

📒 Files selected for processing (2)
  • comfy/context_windows.py
  • comfy_extras/nodes_context_windows.py

Comment on lines +223 to +228
```python
try:
    per_modality_indices = model.map_context_window_to_modalities(
        window.index_list, map_shapes, self.dim)
except AttributeError:
    raise NotImplementedError(
        f"{type(model).__name__} must implement map_context_window_to_modalities for multimodal context windows.")
```

⚠️ Potential issue | 🟡 Minor

AttributeError here can swallow real bugs inside map_context_window_to_modalities.

If the method exists on model but its body raises AttributeError (e.g. touching a missing attribute on self, a typo, an optional submodule returning None), this except will recast it as NotImplementedError("… must implement map_context_window_to_modalities …"), which is actively misleading during debugging. Switching to a hasattr/capability check up front keeps the error surface honest.

🛡️ Suggested guard

```diff
-        try:
-            per_modality_indices = model.map_context_window_to_modalities(
-                window.index_list, map_shapes, self.dim)
-        except AttributeError:
-            raise NotImplementedError(
-                f"{type(model).__name__} must implement map_context_window_to_modalities for multimodal context windows.")
+        if not hasattr(model, "map_context_window_to_modalities"):
+            raise NotImplementedError(
+                f"{type(model).__name__} must implement map_context_window_to_modalities for multimodal context windows.")
+        per_modality_indices = model.map_context_window_to_modalities(
+            window.index_list, map_shapes, self.dim)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@comfy/context_windows.py` around lines 223 - 228, The current try/except
around calling model.map_context_window_to_modalities hides genuine
AttributeErrors raised from inside that method; instead, check capability first
with hasattr(model, "map_context_window_to_modalities") (or
inspect.getattr_static) and raise NotImplementedError if missing, then call
model.map_context_window_to_modalities(window.index_list, map_shapes, self.dim)
directly so any runtime AttributeError or other exceptions inside that
implementation propagate normally for accurate debugging.

coderabbitai Bot left a comment

Actionable comments posted: 1


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 07039581-cf9b-40b0-9aef-7564346af4e7

📥 Commits

Reviewing files that changed from the base of the PR and between 4e434bc and e949250.

📒 Files selected for processing (1)
  • comfy_extras/nodes_context_windows.py

Comment on lines 90 to +94
```diff
     def execute(cls, model: io.Model.Type, context_length: int, context_overlap: int, context_schedule: str, context_stride: int, closed_loop: bool, fuse_method: str, freenoise: bool,
                 cond_retain_index_list: list[int]=[], split_conds_to_windows: bool=False) -> io.Model:
         context_length = max(((context_length - 1) // 4) + 1, 1) # at least length 1
-        context_overlap = max(((context_overlap - 1) // 4) + 1, 0) # at least overlap 0
-        return super().execute(model, context_length, context_overlap, context_schedule, context_stride, closed_loop, fuse_method, dim=2, freenoise=freenoise, cond_retain_index_list=cond_retain_index_list, split_conds_to_windows=split_conds_to_windows)
+        context_overlap = max(context_overlap // 4, 0) # at least overlap 0
+        return super().execute(model, context_length, context_overlap, context_schedule, context_stride, closed_loop, fuse_method, dim=2, freenoise=freenoise, cond_retain_index_list=cond_retain_index_list, split_conds_to_windows=split_conds_to_windows, apply_causal_anchor_prefix=False)
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Verify there is no `apply_causal_anchor_prefix` parameter in the parent
# execute() or in IndexListContextHandler anywhere in the repo.
rg -nP --type=py -C2 '\bapply_causal_anchor_prefix\b'
# Also confirm IndexListContextHandler constructor signature.
ast-grep --pattern $'class IndexListContextHandler($_):
    $$$
    def __init__(self, $$$):
        $$$
'
```

Repository: Comfy-Org/ComfyUI

Length of output: 34155


🏁 Script executed:

```shell
rg -nP --type=py -B5 -A10 'class ContextWindowsManualNode' comfy_extras/
```

Repository: Comfy-Org/ComfyUI

Length of output: 1357


🏁 Script executed:

```shell
rg -nP --type=py -B2 -A15 'def execute\(cls, model: io.Model.Type' comfy_extras/nodes_context_windows.py
```

Repository: Comfy-Org/ComfyUI

Length of output: 3552


Critical: apply_causal_anchor_prefix=False will cause TypeError on every WanContextWindowsManualNode execution.

Line 94 passes apply_causal_anchor_prefix=False to super().execute(), but the parent ContextWindowsManualNode.execute() (line 41) does not accept this parameter, and neither does IndexListContextHandler.__init__() (comfy/context_windows.py:316). This will immediately raise TypeError: execute() got an unexpected keyword argument 'apply_causal_anchor_prefix'.

Either wire this parameter through the parent execute() and IndexListContextHandler chain, or remove it:

Quick fix if plumbing isn't needed:

```diff
-        return super().execute(model, context_length, context_overlap, context_schedule, context_stride, closed_loop, fuse_method, dim=2, freenoise=freenoise, cond_retain_index_list=cond_retain_index_list, split_conds_to_windows=split_conds_to_windows, apply_causal_anchor_prefix=False)
+        return super().execute(model, context_length, context_overlap, context_schedule, context_stride, closed_loop, fuse_method, dim=2, freenoise=freenoise, cond_retain_index_list=cond_retain_index_list, split_conds_to_windows=split_conds_to_windows)
```

If WAN nodes genuinely need to disable a causal-anchor-prefix behavior, both ContextWindowsManualNode.execute() and IndexListContextHandler.__init__() require the matching parameter to be added and honored.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@comfy_extras/nodes_context_windows.py` around lines 90 - 94, The call in
WanContextWindowsManualNode.execute passes an unexpected kwarg
apply_causal_anchor_prefix=False to super().execute(), which
ContextWindowsManualNode.execute and IndexListContextHandler.__init__ don't
accept; remove the apply_causal_anchor_prefix=False argument from the
super().execute(...) call in WanContextWindowsManualNode.execute (or, if the
causal-anchor-prefix behavior is required, add an apply_causal_anchor_prefix
parameter to ContextWindowsManualNode.execute and propagate it into
IndexListContextHandler.__init__ so the flag is accepted and honored).

coderabbitai Bot left a comment

🧹 Nitpick comments (1)
comfy/model_base.py (1)

1136-1150: Falling through when guide_count == 0 looks intentional — just confirming.

When the cond has no guide suffix (guide_count == 0), this branch returns None and lets comfy/context_windows.py handle the slice via the generic cond_value.cond.size(self.dim) == x_in.size(self.dim) path (snippet from context_windows.py:473-520). That should produce the expected per-window mask. Just flagging because the symmetry isn't obvious at a glance — if a future change makes denoise_mask ever larger than x_in along window.dim without populating guide_frames_indices, this fallback would silently slice the wrong region. A small comment in-line noting the intentional fall-through would save the next reader a trip through the caller.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@comfy/model_base.py` around lines 1136 - 1150, Add an inline comment in the
denoise_mask branch explaining that when guide_count == 0 we intentionally fall
through and return None so the caller (comfy/context_windows.py) will handle
slicing via the cond_value.cond.size(self.dim) == x_in.size(self.dim) path;
reference the variables and methods used here (cond_value, guide_count,
window.get_tensor, window.guide_frames_indices, sliced_video) so future readers
understand this is deliberate and not a bug, and warn that if denoise_mask can
become larger than x_in without guide_frames_indices being populated, this
behavior would need revisiting.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: fd636dc2-66e9-4c3f-8c56-e80d2beb7144

📥 Commits

Reviewing files that changed from the base of the PR and between e949250 and f959e6b.

📒 Files selected for processing (2)
  • comfy/model_base.py
  • comfy_extras/nodes_lt.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • comfy_extras/nodes_lt.py
