`google-gemma/olive/README.md`
The exported ONNX model is saved in the `output_model` folder.
To run the ONNX GenAI model, set up the latest version of ONNX Runtime GenAI.
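A minimal install sketch follows, assuming the Python bindings are used and the CPU build of the `onnxruntime-genai` PyPI package; hardware-specific variants (such as the CUDA build) exist, so pick the one that matches your setup:

```bash
# Install the ONNX Runtime GenAI Python package (CPU build assumed here).
python -m pip install --upgrade onnxruntime-genai
```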
A sample chat app, [model-chat.py](https://github.com/microsoft/onnxruntime-genai/blob/main/examples/python/model-chat.py), is available in the [onnxruntime-genai](https://github.com/microsoft/onnxruntime-genai/) GitHub repository.
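A hedged invocation sketch: the `-m` (model folder) and `-e` (execution provider) flags are assumptions about the script's interface, so confirm the exact options with `python model-chat.py --help`:

```bash
# Run the sample chat app against a locally exported model
# (flags are assumptions; verify with --help).
python model-chat.py -m output_model -e cpu
```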
## google/gemma-3-1b-it
```bash
python -m pip install -r requirements.txt
# Export the model with Olive using CPUExecutionProvider at FP32 precision:
olive run --config gemma-3-1b-it_model_builder_cpu_fp32.json
```
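After the export completes, the Gemma model should land in the `output_model` folder noted above and can then be loaded by the sample chat app.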