Commit b91c063

fix: accept **kwargs in Params4bit and Int8Params constructors
accelerate's set_module_tensor_to_device passes the old parameter's __dict__ as **kwargs when reconstituting quantized parameters. Transformers >= 5.x adds _is_hf_initialized to every parameter's __dict__, but Params4bit.__new__ and Int8Params.__new__ had fixed signatures, so any model loaded with BitsAndBytesConfig and device_map="auto" raised a TypeError. Add **kwargs to both constructors so unexpected attributes are silently ignored instead of crashing.
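A minimal sketch of the failure mode, using hypothetical stand-in classes rather than the real bitsandbytes parameter types: a constructor with a fixed signature rejects any extra keyword that a caller forwards from an old parameter's __dict__, while a **kwargs-accepting constructor tolerates it.

```python
# Hypothetical stand-ins for Params4bit before and after the fix.

class StrictParam:
    # Fixed signature, like Params4bit.__new__ before this commit.
    def __init__(self, data=None, bnb_quantized=False):
        self.data = data
        self.bnb_quantized = bnb_quantized

class LenientParam:
    # **kwargs added, like Params4bit.__new__ after this commit:
    # unexpected attributes such as _is_hf_initialized are ignored.
    def __init__(self, data=None, bnb_quantized=False, **kwargs):
        self.data = data
        self.bnb_quantized = bnb_quantized

# Simulates the __dict__ of an existing parameter under newer transformers.
old_dict = {"bnb_quantized": True, "_is_hf_initialized": True}

try:
    StrictParam(data=[0.0], **old_dict)
    strict_raised = False
except TypeError:
    strict_raised = True  # unexpected keyword argument '_is_hf_initialized'

p = LenientParam(data=[0.0], **old_dict)
print(strict_raised, p.bnb_quantized)  # True True
```

The fix deliberately drops the unknown keys rather than storing them, matching the commit's intent of ignoring rather than propagating unexpected attributes.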
1 parent 925d83e commit b91c063

File tree: 1 file changed, +2 −0 lines changed


bitsandbytes/nn/modules.py

Lines changed: 2 additions & 0 deletions
@@ -222,6 +222,7 @@ def __new__(
         quant_storage: torch.dtype = torch.uint8,
         module: Optional["Linear4bit"] = None,
         bnb_quantized: bool = False,
+        **kwargs,
     ) -> "Params4bit":
         if data is None:
             data = torch.empty(0)
@@ -680,6 +681,7 @@ def __new__(
         has_fp16_weights=False,
         CB: Optional[torch.Tensor] = None,
         SCB: Optional[torch.Tensor] = None,
+        **kwargs,
     ):
         if data is None:
             data = torch.empty(0)
