Commit b9d6420
Fix AuraFlow attn processors applying norm_added_q to key projection (#13533)
Both AuraFlowAttnProcessor2_0 and FusedAuraFlowAttnProcessor2_0 were
calling attn.norm_added_q on encoder_hidden_states_key_proj while
guarded by a check on attn.norm_added_k. This applied the query
normalization layer to the key projection, a copy-paste error.
This is consistent with every other attention processor in this file that
defines both norm_added_q and norm_added_k (e.g. FluxAttnProcessor,
CogVideoXAttnProcessor, HunyuanAttnProcessor), where norm_added_k is
applied to the added key projection.

1 parent: 3d30b7d
1 file changed
Lines changed: 2 additions & 2 deletions
[Diff content not captured in this extraction. Two one-line hunks: line 2143 (one line removed, one added) and line 2240 (one line removed, one added).]
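The guard/call mix-up the commit message describes can be illustrated with a minimal, runnable sketch. The Norm and Attn classes below are hypothetical stand-ins, not the real diffusers modules; only the pattern of checking norm_added_k while calling norm_added_q mirrors the reported bug.

```python
class Norm:
    """Stand-in for a QK-norm layer (e.g. RMSNorm), tagged with its name."""
    def __init__(self, name):
        self.name = name

    def __call__(self, x):
        # Record which norm was applied so the mix-up is visible.
        return f"{self.name}({x})"


class Attn:
    """Stand-in attention module exposing the two added-projection norms."""
    norm_added_q = Norm("norm_added_q")
    norm_added_k = Norm("norm_added_k")


attn = Attn()
encoder_hidden_states_key_proj = "key_proj"

# Before the fix: the guard checks norm_added_k but applies norm_added_q,
# so the query norm silently runs on the key projection.
if attn.norm_added_k is not None:
    before = attn.norm_added_q(encoder_hidden_states_key_proj)

# After the fix: the key norm is applied to the key projection.
if attn.norm_added_k is not None:
    after = attn.norm_added_k(encoder_hidden_states_key_proj)

print(before)  # norm_added_q(key_proj)  <- query norm on the key (bug)
print(after)   # norm_added_k(key_proj)  <- key norm on the key (fixed)
```

The bug is invisible whenever norm_added_q and norm_added_k have identical weights (e.g. right after identical initialization), which is one way a copy-paste error like this can survive testing.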