
Commit 42ec7b0
Commit message: bugfix
Parent: d049fb6

2 files changed: 0 additions, 4 deletions


examples/flux/model_training/train.py (0 additions, 2 deletions)

@@ -117,6 +117,4 @@ def forward(self, data, inputs=None):
         remove_prefix_in_ckpt=args.remove_prefix_in_ckpt,
         state_dict_converter=FluxLoRAConverter.align_to_opensource_format if args.align_to_opensource_format else lambda x:x,
     )
-    optimizer = torch.optim.AdamW(model.trainable_modules(), lr=args.learning_rate, weight_decay=args.weight_decay)
-    scheduler = torch.optim.lr_scheduler.ConstantLR(optimizer)
     launch_training_task(dataset, model, model_logger, args=args)

examples/wanvideo/model_training/train.py (0 additions, 2 deletions)

@@ -126,6 +126,4 @@ def forward(self, data, inputs=None):
         args.output_path,
         remove_prefix_in_ckpt=args.remove_prefix_in_ckpt
     )
-    optimizer = torch.optim.AdamW(model.trainable_modules(), lr=args.learning_rate, weight_decay=args.weight_decay)
-    scheduler = torch.optim.lr_scheduler.ConstantLR(optimizer)
     launch_training_task(dataset, model, model_logger, args=args)
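For reference, both deleted hunks followed the same pattern: an AdamW optimizer over the model's trainable parameters plus a constant-factor LR schedule, constructed locally but never passed to `launch_training_task` (which presumably builds its own). A minimal, self-contained sketch of that pattern, where the toy linear model and the hyperparameter values are illustrative stand-ins for the real training model and CLI arguments:

```python
import torch

# Toy stand-in; the real scripts pass model.trainable_modules()
# (an iterable of trainable parameters) instead.
toy_model = torch.nn.Linear(4, 2)

# Illustrative values standing in for args.learning_rate / args.weight_decay.
learning_rate, weight_decay = 1e-4, 1e-2

# The pattern removed by this commit:
optimizer = torch.optim.AdamW(
    toy_model.parameters(), lr=learning_rate, weight_decay=weight_decay
)
scheduler = torch.optim.lr_scheduler.ConstantLR(optimizer)

# With defaults, ConstantLR scales the base LR by factor=1/3 for the
# first total_iters=5 steps, then restores it.
print(optimizer.param_groups[0]["lr"])
```

Note that `ConstantLR` with default arguments does modify the learning rate during the first few steps, so these two lines were not entirely inert; dropping them leaves any optimizer setup to the training task itself.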

0 commit comments
