Commit cc22b42

fix: Fixed spelling in examples/mx readme

Signed-off-by: Brandon Groth <brandon.m.groth@gmail.com>
1 parent 7e7a0fa

2 files changed: 8 additions and 2 deletions

.spellcheck-en-custom.txt (6 additions, 0 deletions)

@@ -27,8 +27,10 @@ dequantization
 dq
 DQ
 dev
+dtype
 eval
 fms
+fmsmo
 fp
 FP
 FP8Arguments
@@ -127,5 +129,9 @@ xs
 zp
 microxcaling
 MX
+mx
 MXINT
+mxint
 MXFP
+mxfp
+OCP

examples/MX/README.md (2 additions, 2 deletions)

@@ -1,5 +1,5 @@
-# `microscaling` Examples Using a Toy Model and Direct Quantization (DQ)
-Microscaling, or "MX", format, such as `MXFP8`, is a different numeric format compared to commonly used FP8 formats. For example, PyTorch provides two FP8 formats, which are 1 sign bit, 4 exponent bits, and 3 mantissa bits (denoted as `e4m3`) or 1 sign bit, 5 exponent bits, and 2 mantissa bits (`e5m2`), see our other [FP8 example](../FP8_QUANT/README.md) for more details. On the other hand, all the `mx` formats are group-based data structure where each member of the group is using the specified format, e.g. FP8 for MXFP8, while each group has a shared (usually 8-bit) "scale". Group size could be as small as 32 or 16, depending on hardware design. One may consider each MXFP8 number actually requires 8.25 bits (when group size is 32) instead of 8 bits. More details about microscaling can be found in [this OCP document](https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf).
+# `microxcaling` Examples Using a Toy Model and Direct Quantization (DQ)
+Microxcaling, or "MX", format, such as `MXFP8`, is a different numeric format compared to commonly used FP8 formats. For example, PyTorch provides two FP8 formats, which are 1 sign bit, 4 exponent bits, and 3 mantissa bits (denoted as `e4m3`) or 1 sign bit, 5 exponent bits, and 2 mantissa bits (`e5m2`), see our other [FP8 example](../FP8_QUANT/README.md) for more details. On the other hand, all the `mx` formats are group-based data structure where each member of the group is using the specified format, e.g. FP8 for MXFP8, while each group has a shared (usually 8-bit) "scale". Group size could be as small as 32 or 16, depending on hardware design. One may consider each MXFP8 number actually requires 8.25 bits (when group size is 32) instead of 8 bits. More details about Microxcaling can be found in [this OCP document](https://www.opencompute.org/documents/ocp-microxcaling-formats-mx-v1-0-spec-final-pdf).
 
 Here, we provide two simple examples of using MX format in `fms-mo`.

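The README paragraph's bit accounting (each MX element's own bits plus a shared 8-bit scale amortized over the group, so MXFP8 with group size 32 costs 8.25 bits per number) can be sketched as follows. The helper name is illustrative only, not part of `fms-mo` or the microxcaling library:

```python
# Effective storage cost per element in an MX-style format: the element's own
# bits plus the group's shared scale bits spread across the group members.
def effective_bits(element_bits: int, scale_bits: int = 8, group_size: int = 32) -> float:
    return element_bits + scale_bits / group_size

print(effective_bits(8))                 # MXFP8, group size 32 -> 8.25
print(effective_bits(8, group_size=16))  # smaller group of 16 -> 8.5
```

A smaller group size improves scaling granularity (each scale covers fewer values) at the cost of more bits per element, which is the hardware trade-off the README alludes to.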