feat: Context Windows sampling with LTX2 models and IC-LoRA guides (CORE-3) #13325
drozbay wants to merge 28 commits into Comfy-Org:master
Conversation
…g to context windows
… properly with the retain index list
…ntext windows with guides
…odel. Older LTXV model's guides + context_windows will need to be re-implemented but outside the scope of LTX2 changes
… LTXAV model hooks
…dard execute path
…e variable names, refactor and condense new context window methods to separate execution paths cleanly
…; Condense execution path to treat all latents as potentially multimodal
📝 Walkthrough
Adds multimodal-aware context-windowing: introduces WindowingState and extends context-window classes to produce per-modality windows, slice and reassemble latents, inject and strip guide frames, and adjust …
🚥 Pre-merge checks: ❌ 1 failed (1 warning), ✅ 4 passed
Actionable comments posted: 2
Inline comments:
In `@comfy/context_windows.py`:
- Around line 667-670: The clamp is still running for packed multimodal latents;
change the condition to skip truncation when the handler indicates
packed/multimodal data. Update the if-statement that uses handler.dim,
noise_shape and handler.context_length to require that the handler is NOT a
packed multimodal case (e.g. add "and not getattr(handler, 'packed_multimodal',
False)" or "and not getattr(handler, 'is_packed', False)") before mutating
noise_shape[handler.dim]; this ensures you only min() the value for real
frame-count axes and leave flattened multimodal token axes untouched.
In `@comfy/model_base.py`:
- Around line 1086-1104: The current map_context_window_to_modalities collapses
wrapped/strided video windows by using only len(primary_indices) and
max(primary_indices); instead, map each element of primary_indices individually
into the secondary modality using its relative position within the video
timeline so wrapped/strided geometry is preserved (e.g., for each idx in
primary_indices compute mapped_idx = clamp(round(idx * mod_total / video_total),
0, mod_total-1) and collect those mapped indices in the same order/stride as
primary_indices), then if you need a contiguous window for downstream code
optionally compute a minimal bounding window but prefer returning the ordered
mapped list; change map_context_window_to_modalities to iterate primary_indices
and compute per-index mappings for each latent_shapes[i] (refer to function name
map_context_window_to_modalities and caller create_windows_uniform_looped)
rather than using video_window_len and max(primary_indices).
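The per-index mapping suggested above can be sketched as follows (the function name and totals are illustrative, not the PR's exact signature):

```python
def map_primary_indices(primary_indices, video_total, mod_total):
    # Map each video-frame index into the secondary modality by its relative
    # position along the timeline, preserving wrapped/strided geometry instead
    # of collapsing the window to a contiguous range.
    return [min(max(round(idx * mod_total / video_total), 0), mod_total - 1)
            for idx in primary_indices]
```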
📒 Files selected for processing (4)
- comfy/context_windows.py
- comfy/ldm/lightricks/model.py
- comfy/model_base.py
- comfy_extras/nodes_lt.py
…imary windows: The previous window-level calculation collapsed wrapped or strided primary windows into a contiguous audio tail, so audio attended to a different temporal region than the video. Replace with per-frame mapping that computes each primary index's audio span independently and concatenates in order.
…/ComfyUI into 20260322a_ltx_contextwin
🧹 Nitpick comments (1)
comfy/model_base.py (1)
1159: Consider moving the import to module level. The `from comfy.ldm.lightricks.symmetric_patchifier import latent_to_pixel_coords` import is inside the method. If this isn't for circular-import avoidance, consider moving it to the module-level imports for clarity and to catch import errors at load time.
📒 Files selected for processing (1)
comfy/model_base.py
01cebdb to b348c7f
🧹 Nitpick comments (1)
comfy/model_base.py (1)
1091-1102: Consider guarding against a zero-length video dimension. If `latent_shapes[0][dim]` is zero (edge case of an empty video latent), the division on lines 1101-1102 would raise a `ZeroDivisionError`. While this is unlikely in normal operation, a defensive check would improve robustness.
🛡️ Suggested defensive check

```diff
 video_total = latent_shapes[0][dim]
+if video_total == 0:
+    return result
 for i in range(1, len(latent_shapes)):
```
📒 Files selected for processing (1)
comfy/model_base.py
Actionable comments posted: 3
Inline comments:
In `@comfy/context_windows.py`:
- Around line 536-542: The fusion path currently uses the global
handler.context_overlap rather than the per-modality window info passed as
modality_window; update combine_context_window_results to accept and propagate a
per-modality overlap/profile (derived from modality_window) and change the
overlap-linear fuse code inside it to use that per-modality overlap/profile
instead of handler.context_overlap so ramp lengths and edge weights are computed
per modality; ensure callers (where combine_context_window_results is invoked
with modality_window in the shown loop) pass the modality-specific profile
through and that any downstream helpers that expect a single overlap value are
adjusted to accept the per-modality value.
- Around line 173-175: The code currently raises ValueError when a
guide_attention_entry lacks latent_start; change this to a backwards-compatible
path: in the handler that processes guide_attention_entry (the block referencing
latent_start on the entry object in comfy/context_windows.py), do not
raise—assign a safe default (e.g., latent_start = 0 or the same default used by
built-in LTX nodes) and optionally emit a non-fatal warning via the module
logger (e.g., logger.warning) naming the entry and that latent_start was
assumed, so old custom nodes continue to work without breaking changes.
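The backwards-compatible accessor described above might look like this (the function name and default value are illustrative; only the getattr-plus-warning pattern is taken from the suggestion):

```python
import logging

logger = logging.getLogger(__name__)

def get_latent_start(entry, default=0):
    # A missing latent_start falls back to a safe default with a non-fatal
    # warning instead of raising ValueError, so older custom nodes keep working.
    latent_start = getattr(entry, "latent_start", None)
    if latent_start is None:
        logger.warning("guide_attention_entry %r has no latent_start; assuming %d",
                       entry, default)
        return default
    return latent_start
```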
- Around line 336-351: The FreeNoise step currently shuffles the entire primary
modality including the appended guide suffix; instead, detect and preserve the
guide suffix on modalities[0] (the same trailing guide frame count that
_build_window_state strips) by splitting modalities[0] into prefix and suffix,
call apply_freenoise only on the prefix, then reattach the suffix before
packing; update _apply_freenoise to perform this split/rejoin for modalities[0]
(using the same guide frame count source used by _build_window_state) while
keeping the existing per-modality ratio logic and calling apply_freenoise for
other modalities as before.
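The split/reattach step described above can be sketched independently as a list-based operation over the temporal axis (the function names are assumptions, not the PR's API; the real code operates on tensors):

```python
def apply_freenoise_preserving_guides(frames, guide_count, shuffle_fn):
    # frames: per-frame latents of the primary modality along the temporal axis.
    # Shuffle only the generated prefix; the trailing guide_count guide frames
    # are split off first and reattached unchanged afterwards.
    if guide_count <= 0:
        return shuffle_fn(frames)
    prefix, suffix = frames[:-guide_count], frames[-guide_count:]
    return shuffle_fn(prefix) + suffix
```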
📒 Files selected for processing (2)
- comfy/context_windows.py
- comfy/model_base.py
@coderabbitai resume
✅ Actions performed: Reviews resumed.
🧹 Nitpick comments (1)
comfy/context_windows.py (1)
889-901: Consider adding a bounds check for `context_overlap` vs `length`. If `context_overlap` exceeds `length` (e.g., due to rounding when scaling for secondary modalities), the `weights_torch[:context_overlap] = ramp_up` assignment would fail with a shape mismatch. While unlikely in practice given both values are scaled by the same ratio, a defensive clamp would prevent edge-case runtime errors.
🛡️ Optional defensive fix

```diff
 def create_weights_overlap_linear(length: int, full_length: int, idxs: list[int], context_overlap: int, **kwargs):
     # based on code in Kijai's WanVideoWrapper: https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/dbb2523b37e4ccdf45127e5ae33e31362f755c8e/nodes.py#L1302
     # only expected overlap is given different weights
     weights_torch = torch.ones((length))
+    # clamp overlap to prevent shape mismatch in edge cases
+    context_overlap = min(context_overlap, length)
     # blend left-side on all except first window
     if min(idxs) > 0:
         ramp_up = torch.linspace(1e-37, 1, context_overlap)
         weights_torch[:context_overlap] = ramp_up
```
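The clamping behavior in isolation, as a plain-Python sketch (the real function builds torch tensors and ramps from near zero; the linear ramp here is simplified for illustration):

```python
def overlap_linear_weights(length, idxs, context_overlap):
    # Clamp so the ramp slice can never exceed the window length, and skip
    # ramp generation entirely when the (clamped) overlap is zero.
    context_overlap = max(0, min(int(context_overlap), length))
    weights = [1.0] * length
    if context_overlap == 0:
        return weights
    if min(idxs) > 0:  # blend left side on all except the first window
        for i in range(context_overlap):
            weights[i] = (i + 1) / context_overlap
    return weights
```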
📒 Files selected for processing (1)
comfy/context_windows.py
…t retention doesn't occur on all models when unintended.
Actionable comments posted: 1
🧹 Nitpick comments (1)
comfy/context_windows.py (1)
713-719: Optional: simplify conds extraction for the FreeNoise path.
`guider.conds.get('positive', guider.conds.get('negative', []))` always evaluates the inner `.get` (negligible cost) and prefers `positive` even when it's an empty list, only falling back to `negative` if the `positive` key is entirely absent — a mildly surprising ordering. Since `_get_guide_entries` only needs any cond list that carries `guide_attention_entries`, passing both is simpler and slightly safer if a future guider only populates one side. Also, `guider.conds` itself is assumed to exist; a small `getattr` fallback avoids tripping up custom guiders that don't subclass `CFGGuider`.
♻️ Suggested tidy-up

```diff
- conds = [guider.conds.get('positive', guider.conds.get('negative', []))]
- noise = handler._apply_freenoise(noise, conds, extra_args["seed"])
+ guider_conds = getattr(guider, "conds", {}) or {}
+ conds = [guider_conds.get("positive", []), guider_conds.get("negative", [])]
+ noise = handler._apply_freenoise(noise, conds, extra_args["seed"])
```
📒 Files selected for processing (2)
- comfy/context_windows.py
- comfy_extras/nodes_context_windows.py
```python
try:
    per_modality_indices = model.map_context_window_to_modalities(
        window.index_list, map_shapes, self.dim)
except AttributeError:
    raise NotImplementedError(
        f"{type(model).__name__} must implement map_context_window_to_modalities for multimodal context windows.")
```
AttributeError here can swallow real bugs inside map_context_window_to_modalities.
If the method exists on model but its body raises AttributeError (e.g. touching a missing attribute on self, a typo, an optional submodule returning None), this except will recast it as NotImplementedError("… must implement map_context_window_to_modalities …"), which is actively misleading during debugging. Switching to a hasattr/capability check up front keeps the error surface honest.
🛡️ Suggested guard

```diff
-try:
-    per_modality_indices = model.map_context_window_to_modalities(
-        window.index_list, map_shapes, self.dim)
-except AttributeError:
-    raise NotImplementedError(
-        f"{type(model).__name__} must implement map_context_window_to_modalities for multimodal context windows.")
+if not hasattr(model, "map_context_window_to_modalities"):
+    raise NotImplementedError(
+        f"{type(model).__name__} must implement map_context_window_to_modalities for multimodal context windows.")
+per_modality_indices = model.map_context_window_to_modalities(
+    window.index_list, map_shapes, self.dim)
```
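A standalone demonstration of the difference (the class is contrived to contain an internal bug, purely for illustration):

```python
class BuggyModel:
    def map_context_window_to_modalities(self, *args):
        # internal bug: touches a missing attribute, raising AttributeError
        return self.missing_attr

def call_via_except(model, *args):
    # try/except recasts the internal AttributeError as NotImplementedError,
    # hiding the real bug from whoever is debugging
    try:
        return model.map_context_window_to_modalities(*args)
    except AttributeError:
        raise NotImplementedError("must implement map_context_window_to_modalities")

def call_via_hasattr(model, *args):
    # capability check up front: the internal AttributeError propagates unchanged
    if not hasattr(model, "map_context_window_to_modalities"):
        raise NotImplementedError("must implement map_context_window_to_modalities")
    return model.map_context_window_to_modalities(*args)
```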
Actionable comments posted: 1
📒 Files selected for processing (1)
comfy_extras/nodes_context_windows.py
```diff
 def execute(cls, model: io.Model.Type, context_length: int, context_overlap: int, context_schedule: str, context_stride: int, closed_loop: bool, fuse_method: str, freenoise: bool,
             cond_retain_index_list: list[int]=[], split_conds_to_windows: bool=False) -> io.Model:
     context_length = max(((context_length - 1) // 4) + 1, 1) # at least length 1
-    context_overlap = max(((context_overlap - 1) // 4) + 1, 0) # at least overlap 0
-    return super().execute(model, context_length, context_overlap, context_schedule, context_stride, closed_loop, fuse_method, dim=2, freenoise=freenoise, cond_retain_index_list=cond_retain_index_list, split_conds_to_windows=split_conds_to_windows)
+    context_overlap = max(context_overlap // 4, 0) # at least overlap 0
+    return super().execute(model, context_length, context_overlap, context_schedule, context_stride, closed_loop, fuse_method, dim=2, freenoise=freenoise, cond_retain_index_list=cond_retain_index_list, split_conds_to_windows=split_conds_to_windows, apply_causal_anchor_prefix=False)
```
Critical: apply_causal_anchor_prefix=False will cause a TypeError on every WanContextWindowsManualNode execution.
Line 94 passes apply_causal_anchor_prefix=False to super().execute(), but the parent ContextWindowsManualNode.execute() (line 41) does not accept this parameter, and neither does IndexListContextHandler.__init__() (comfy/context_windows.py:316). This will immediately raise TypeError: execute() got an unexpected keyword argument 'apply_causal_anchor_prefix'.
Either wire this parameter through the parent execute() and IndexListContextHandler chain, or remove it.
Quick fix if plumbing isn't needed

```diff
- return super().execute(model, context_length, context_overlap, context_schedule, context_stride, closed_loop, fuse_method, dim=2, freenoise=freenoise, cond_retain_index_list=cond_retain_index_list, split_conds_to_windows=split_conds_to_windows, apply_causal_anchor_prefix=False)
+ return super().execute(model, context_length, context_overlap, context_schedule, context_stride, closed_loop, fuse_method, dim=2, freenoise=freenoise, cond_retain_index_list=cond_retain_index_list, split_conds_to_windows=split_conds_to_windows)
```

If WAN nodes genuinely need to disable a causal-anchor-prefix behavior, both ContextWindowsManualNode.execute() and IndexListContextHandler.__init__() require the matching parameter to be added and honored.
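The failure mode in miniature (class names and signatures are simplified stand-ins for the nodes above):

```python
class ContextWindowsManual:
    @classmethod
    def execute(cls, model, dim=2, freenoise=False):
        return {"model": model, "dim": dim, "freenoise": freenoise}

class WanContextWindowsManual(ContextWindowsManual):
    @classmethod
    def execute(cls, model):
        # The parent signature has no apply_causal_anchor_prefix parameter,
        # so this call raises TypeError the moment the node executes.
        return super().execute(model, dim=2, apply_causal_anchor_prefix=False)
```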
🧹 Nitpick comments (1)
comfy/model_base.py (1)
1136-1150: Falling through when `guide_count == 0` looks intentional — just confirming. When the cond has no guide suffix (`guide_count == 0`), this branch returns `None` and lets comfy/context_windows.py handle the slice via the generic `cond_value.cond.size(self.dim) == x_in.size(self.dim)` path (snippet from context_windows.py:473-520). That should produce the expected per-window mask. Just flagging because the symmetry isn't obvious at a glance: if a future change makes `denoise_mask` ever larger than `x_in` along `window.dim` without populating `guide_frames_indices`, this fallback would silently slice the wrong region. A small inline comment noting the intentional fall-through would save the next reader a trip through the caller.
📒 Files selected for processing (2)
- comfy/model_base.py
- comfy_extras/nodes_lt.py
🚧 Files skipped from review as they are similar to previous changes (1)
- comfy_extras/nodes_lt.py
Enables LTX-2 (LTXAV) multimodal video+audio models to work with context windows, and reworks IC-LoRA guide handling so guides at arbitrary positions get spliced correctly into each window they overlap.
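One plausible shape for the WindowingState this PR introduces, as a hedged sketch only (field and method names are hypothetical, not the PR's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class WindowingState:
    # Holds per-modality latents plus the concatenated guide latents, if any.
    modality_latents: list = field(default_factory=list)
    guide_latents: object = None
    guide_frame_count: int = 0

    def has_guides(self) -> bool:
        return self.guide_latents is not None and self.guide_frame_count > 0
```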
Context windowing now treats every input as potentially multimodal via a new `WindowingState` that holds per-modality latents and also the concatenated guide latents if present. A new `map_context_window_to_modalities` hook on the LTXAV model derives per-modality windows from the primary (video) window. LTXAV's guide-specific slicing (denoise_mask, keyframe_idxs, guide_attention_entries, audio mask) lives on the model via the `resize_cond_for_context_window` pattern.
Testing and workflows
Exercises guides at both ends of the clip and the partial-overlap path where a guide only participates in a subset of windows.
LTX2_Stage2_00008_.mp4
droz_LTX-2_ContextWindows_FFLF_Test.json
(Asset) Start and end frames:


LTX2_Stage2_00012_.mp4
droz_LTX-2_ContextWindows_CannyTest.json
(Asset) Input Guide Video:
CannyTest.mp4
WanAnimate_00004_.mp4
droz_WanAnimate_ContextWindows_Native_v1.1.json