Commit 4fd8d6e: docstring update
1 parent e11737b

1 file changed: 2 additions & 2 deletions

bitsandbytes/functional.py
@@ -842,7 +842,7 @@ def quantize_4bit(
         out (`torch.Tensor`, *optional*): A tensor to use to store the result.
         blocksize (`int`, *optional*):
             The size of the blocks. Defaults to 128 on ROCm and 64 otherwise.
-            Valid values are 64, 128, 256, 512, 1024, 2048, and 4096.
+            Valid values are 32, 64, 128, 256, 512, 1024, 2048, and 4096.
         compress_statistics (`bool`, *optional*): Whether to additionally quantize the absmax values. Defaults to False.
         quant_type (`str`, *optional*): The data type to use: `nf4` or `fp4`. Defaults to `fp4`.
         quant_storage (`torch.dtype`, *optional*): The dtype of the tensor used to store the result. Defaults to `torch.uint8`.
@@ -953,7 +953,7 @@ def dequantize_4bit(
         out (`torch.Tensor`, *optional*): A tensor to use to store the result.
         blocksize (`int`, *optional*):
             The size of the blocks. Defaults to 128 on ROCm and 64 otherwise.
-            Valid values are 64, 128, 256, 512, 1024, 2048, and 4096.
+            Valid values are 32, 64, 128, 256, 512, 1024, 2048, and 4096.
         quant_type (`str`, *optional*): The data type to use: `nf4` or `fp4`. Defaults to `fp4`.

     Raises:
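The two hunks above only widen the documented set of valid 4-bit block sizes to include 32. As a rough illustration of what that documented contract implies, a standalone validation helper might look like the sketch below. Note that `check_blocksize` is hypothetical and not part of bitsandbytes; it simply encodes the value set named in the updated docstrings.

```python
# Hypothetical helper, not from bitsandbytes: encodes the block sizes
# that the updated docstrings list as valid for quantize_4bit /
# dequantize_4bit (now including 32).
VALID_BLOCKSIZES = {32, 64, 128, 256, 512, 1024, 2048, 4096}


def check_blocksize(blocksize: int) -> int:
    """Return blocksize if it is in the documented set, else raise."""
    if blocksize not in VALID_BLOCKSIZES:
        raise ValueError(
            f"blocksize must be one of {sorted(VALID_BLOCKSIZES)}, got {blocksize}"
        )
    return blocksize
```

Before this change, passing `blocksize=32` was outside the documented set even though the rest of the docstring (power-of-two sizes from 64 to 4096) suggested the same pattern continues downward.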
