Commit 81697ba

Fix document link (microsoft#2126)
## Describe your changes

Fix document link

## Checklist before requesting a review

- [ ] Add unit tests for this change.
- [ ] Make sure all tests can pass.
- [ ] Update documents if necessary.
- [ ] Lint and apply fixes to your code by running `lintrunner -a`
- [ ] Is this a user-facing change? If yes, give a description of this change to be included in the release notes.
- [ ] Is this PR including examples changes? If yes, please remember to update [example documentation](https://github.com/microsoft/Olive/blob/main/docs/source/examples.md) in a follow-up PR.

## (Optional) Issue link
1 parent 7e4c540 commit 81697ba

2 files changed: 3 additions & 3 deletions


docs/source/how-to/cli/cli-optimize.md (2 additions & 2 deletions)

```diff
@@ -30,8 +30,8 @@ This command will quantize weights into int4 precision before converting the mod

 ## Customizing model optimization process

-`olive optimize` primarily requests desired model precision and intended ExecutionProvider that will be used to run the optimized model. Based on these information, `olive optimize` command will generate model optimiation recipe as per user request and execute the recipe to produce to output model. Advanced users can use `--dry_run` option to save the `config.json` file on the disk. See comprehensive list of [options](../reference/options.html) you can use to customize the model optimization process further by modifying the `config.json` file produced by the `olive optimize --dry_run ...` command.
+`olive optimize` primarily requests desired model precision and intended ExecutionProvider that will be used to run the optimized model. Based on these information, `olive optimize` command will generate model optimiation recipe as per user request and execute the recipe to produce to output model. Advanced users can use `--dry_run` option to save the `config.json` file on the disk. See comprehensive list of [options](../../reference/options.html) you can use to customize the model optimization process further by modifying the `config.json` file produced by the `olive optimize --dry_run ...` command.

 ## Additional details

-See `olive optimize` [reference](../reference/python_api.md#optimize) for the complete list of supported options.
+See `olive optimize` [reference](../../reference/python_api.md#optimize) for the complete list of supported options by this command.
```
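To see why the fix adds an extra `../`, here is a small illustrative sketch (not part of the commit). Relative links in `docs/source/how-to/cli/cli-optimize.md` resolve against its own directory, and the sketch assumes the target page actually lives at `docs/source/reference/options.html`, which is what the corrected path implies:

```python
import posixpath

# Directory the page lives in; relative links resolve against it.
page_dir = "docs/source/how-to/cli"

# Old link: one "../" only climbs back up to how-to/, pointing at a
# path that does not exist in the docs tree.
broken = posixpath.normpath(posixpath.join(page_dir, "../reference/options.html"))

# Fixed link: two "../" climb up to docs/source/, where reference/ lives.
fixed = posixpath.normpath(posixpath.join(page_dir, "../../reference/options.html"))

print(broken)  # docs/source/how-to/reference/options.html
print(fixed)   # docs/source/reference/options.html
```

The same reasoning applies to the `python_api.md#optimize` link in the second hunk.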

docs/source/how-to/index.md (1 addition & 1 deletion)

```diff
@@ -7,7 +7,7 @@

 The Olive CLI provides a set of primitives such as `quantize`, `finetune`, `onnx-graph-capture`, `auto-opt` that enable you to *easily* optimize select models and experiment with different cutting-edge optimization strategies without the need to define workflows.

--[How to use the `olive optimize` command to optimize a Pytorch model](cli/cli-optimize)
+- [How to use the `olive optimize` command to optimize a Pytorch model](cli/cli-optimize)
 - [How to use the `olive auto-opt` command to take a PyTorch/Hugging Face model and turn it into an optimized ONNX model](cli/cli-auto-opt)
 - [how to use the `olive finetune` command to create (Q)LoRA adapters](cli/cli-finetune)
 - [How to use the `olive quantize` command to quantize your model with different precisions and techniques such as AWQ](cli/cli-quantize)
```
