Commit 925d83e

Merge pull request #1893 from ailuntz/fix/matmul-4bit-out
Honor out in matmul_4bit
2 parents 713a3b8 + c25c294 commit 925d83e

File tree

1 file changed (+3, −0 lines)


bitsandbytes/autograd/_functions.py

Lines changed: 3 additions & 0 deletions
@@ -318,6 +318,9 @@ def forward(ctx, A, B, out=None, bias=None, quant_state: Optional[F.QuantState]
         # 1. Dequantize
         # 2. MatmulnN
         output = torch.nn.functional.linear(A, F.dequantize_4bit(B, quant_state).to(A.dtype).t(), bias)
+        if out is not None:
+            out.copy_(output)
+            output = out

         # 3. Save state
         ctx.state = quant_state
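The change makes the 4-bit forward path honor a caller-supplied `out` tensor: the computed result is copied into `out` and `out` itself is returned, so code that preallocates an output buffer actually sees it filled. A minimal sketch of the same pattern using plain `torch.nn.functional.linear` (the 4-bit dequantize step is omitted, and `linear_with_out` is a hypothetical stand-in for the patched `forward`):

```python
import torch

def linear_with_out(A, W, bias=None, out=None):
    # Compute the matmul first, then honor a caller-provided `out`
    # buffer by copying into it, mirroring the pattern in this commit.
    output = torch.nn.functional.linear(A, W, bias)
    if out is not None:
        out.copy_(output)  # write the result into the caller's buffer
        output = out       # return the very tensor the caller passed in
    return output

A = torch.randn(2, 4)
W = torch.randn(3, 4)
buf = torch.empty(2, 3)
res = linear_with_out(A, W, out=buf)
assert res is buf  # the caller's preallocated tensor is returned
```

Without the `out.copy_` / `output = out` step, the function would silently ignore the buffer and return a freshly allocated tensor, which is the bug this commit fixes.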
