Commit 59c1cf7 (1 parent: adf1d27)

update readme

Signed-off-by: h-guo18 <67671475+h-guo18@users.noreply.github.com>

1 file changed: examples/speculative_decoding/README.md (1 addition, 1 deletion)
```diff
@@ -82,7 +82,7 @@ For small base models that fit in GPU memory, we can collocate them with draft m
 
 All default training settings live in `eagle3.yaml`; override any field via OmegaConf dotlist arguments on the command line.
 
-To enable context parallelism for long-context training, add `training.cp_size=<N>` to the overrides.
+To enable context parallelism for long-context training, add `training.cp_size=<N>`.
 The saved modelopt checkpoint is similar in architecture to HF models. It can be further optimized through **ModelOpt**, e.g., PTQ and QAT.
 
 ## Training Draft Model with Offline Base Model
```
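The README line being edited relies on OmegaConf-style dotlist overrides, where a command-line argument like `training.cp_size=4` updates one field of a nested config. As a rough illustration of that mechanic (a standalone sketch, not ModelOpt or OmegaConf code; the function name and naive value typing are assumptions for demonstration):

```python
def apply_dotlist(config, overrides):
    """Apply "a.b.c=value" style overrides to a nested dict, in place.

    Illustrative sketch of dotlist semantics; real OmegaConf handles
    typing, interpolation, and validation far more thoroughly.
    """
    for item in overrides:
        key, _, raw = item.partition("=")
        node = config
        parts = key.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        # Naive typing: integer-looking values become ints, rest stay strings.
        node[parts[-1]] = int(raw) if raw.lstrip("-").isdigit() else raw
    return config

# Hypothetical defaults standing in for the fields eagle3.yaml might define.
defaults = {"training": {"cp_size": 1}}
merged = apply_dotlist(defaults, ["training.cp_size=4"])
print(merged["training"]["cp_size"])  # → 4
```

This mirrors why the shortened README wording still reads correctly: `training.cp_size=<N>` is itself an override, so "add ... to the overrides" was redundant.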
