
Commit 6d8f9e7 (1 parent: 82ce309)

update readme

Signed-off-by: h-guo18 <67671475+h-guo18@users.noreply.github.com>

1 file changed (+1 −1)
examples/speculative_decoding/README.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -80,7 +80,7 @@ For small base models that fit in GPU memory, we can collocate them with draft m
 training.output_dir=ckpts/llama-3.2-1b-online
 ```
 
-All default training settings live in `eagle3.yaml`; override any field via OmegaConf dotlist arguments on the command line.
+All default training settings are in `eagle3.yaml`. You can adjust them by editing the YAML file or by specifying command-line overrides with OmegaConf dotlist arguments.
 
 To enable context parallelism for long-context training, add `training.cp_size=<N>`.
 The saved modelopt checkpoint is similar in architecture to HF models. It can be further optimized through **ModelOpt**, e.g., PTQ and QAT.
````
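To illustrate the merge semantics behind a dotlist override such as `training.cp_size=<N>`, here is a minimal pure-Python sketch of how an `a.b.c=value` argument maps onto the nested YAML config. The `apply_dotlist` helper and the config values are hypothetical; OmegaConf itself additionally handles full YAML typing, interpolation, and validation.

```python
def apply_dotlist(cfg: dict, overrides: list[str]) -> dict:
    """Apply `a.b.c=value` style overrides to a nested config dict (sketch)."""
    for item in overrides:
        keypath, _, raw = item.partition("=")
        keys = keypath.split(".")
        node = cfg
        for key in keys[:-1]:
            # Walk (or create) intermediate mappings along the dotted path.
            node = node.setdefault(key, {})
        # Naive literal parsing; OmegaConf performs proper YAML type coercion.
        try:
            value = int(raw)
        except ValueError:
            value = raw
        node[keys[-1]] = value
    return cfg

# Defaults as they might appear in eagle3.yaml (illustrative values only).
cfg = {"training": {"output_dir": "ckpts/default", "cp_size": 1}}
apply_dotlist(cfg, ["training.output_dir=ckpts/llama-3.2-1b-online",
                    "training.cp_size=2"])
print(cfg["training"]["cp_size"])     # → 2
print(cfg["training"]["output_dir"])  # → ckpts/llama-3.2-1b-online
```

Command-line dotlist arguments in the example above override the YAML defaults, which is why the same `eagle3.yaml` can drive different runs without editing the file.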
