Commit 7e85a3f

TimDettmers and claude committed:

fix: Remove unused dequantize_nvfp4 import in LinearNVFP4

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

1 parent: f3916d4

1 file changed: +1 addition, -1 deletion

bitsandbytes/nn/modules.py (1 addition, 1 deletion)

@@ -718,7 +718,7 @@ def forward(self, x: torch.Tensor) -> torch.Tensor:
         if not self.weight_quantized:
             self._quantize_weight()

-        from bitsandbytes.functional import dequantize_nvfp4, gemm_nvfp4, quantize_nvfp4
+        from bitsandbytes.functional import gemm_nvfp4, quantize_nvfp4

         inp_dtype = x.dtype
         input_shape = x.shape
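Unused imports like the one removed here can be caught mechanically. A minimal sketch using Python's standard `ast` module follows; the snippet it analyzes is illustrative and only mirrors the shape of the change in this commit, it is not the actual `LinearNVFP4.forward` body:

```python
import ast

def unused_imports(source: str) -> list[str]:
    """Return names imported in `source` that are never referenced."""
    tree = ast.parse(source)
    imported: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.ImportFrom):
            # `from m import a as b` binds `b`; otherwise it binds `a`.
            imported.update(alias.asname or alias.name for alias in node.names)
        elif isinstance(node, ast.Import):
            # `import a.b` binds the top-level name `a`.
            imported.update((alias.asname or alias.name).split(".")[0]
                            for alias in node.names)
    # Every bare name that appears anywhere in the module.
    used = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}
    return sorted(imported - used)

# Illustrative snippet mirroring the import fixed in this commit:
snippet = (
    "from bitsandbytes.functional import dequantize_nvfp4, gemm_nvfp4, quantize_nvfp4\n"
    "qx = quantize_nvfp4(x)\n"
    "out = gemm_nvfp4(qx, w)\n"
)
print(unused_imports(snippet))  # ['dequantize_nvfp4']
```

Linters such as ruff (rule F401) perform the same check automatically, which is how dead imports like this are usually flagged in CI.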

0 commit comments