Commit 61e2e4c (1 parent: 2c8feba)

fix: bump up bitsandbytes version to support torch==2.6.0 upgrade

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

1 file changed: plugins/accelerated-peft/requirements.txt (3 additions, 4 deletions)
@@ -5,10 +5,9 @@
 accelerate >= 0.29
 
 # bitsandbytes for the BNB plugin
-# - lower bound is because bnb is missing quant_state
-# - upper bound is because of segmentation faults
-# see https://github.com/foundation-model-stack/fms-acceleration/issues/17
-bitsandbytes >=0.41,<=0.43.3
+# exact version is needed 0.45.1 for torch upgrade to 2.6
+
+bitsandbytes == 0.45.1
 
 # Used to manage the thread limit in functions for converting old
 # GPTQ models to new GPTQ model format that support symmetrical=False
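The change above replaces a version range (`>=0.41,<=0.43.3`) with an exact pin (`== 0.45.1`). As a minimal sketch of what that means for dependency resolution, the hypothetical helper below (not part of fms-acceleration; the pin-matching logic is simplified and does not handle pre-releases or compound specifiers) checks a version string against a single pip-style pin:

```python
# Hypothetical helper: check whether an installed version string satisfies
# a single pip-style pin such as '== 0.45.1' or '>=0.41'.
import re

def version_tuple(v):
    """Turn '0.45.1' into (0, 45, 1) for component-wise comparison."""
    return tuple(int(part) for part in v.split("."))

def satisfies(installed, pin):
    """Check 'installed' against one pin; supports ==, <=, >=, <, >."""
    match = re.match(r"(==|<=|>=|<|>)\s*([\d.]+)", pin.strip())
    op, required = match.group(1), match.group(2)
    a, b = version_tuple(installed), version_tuple(required)
    return {"==": a == b, "<=": a <= b, ">=": a >= b, "<": a < b, ">": a > b}[op]

# The old range admitted 0.43.3, but the new exact pin rejects it:
print(satisfies("0.43.3", ">=0.41"))     # within the old lower bound -> True
print(satisfies("0.43.3", "== 0.45.1"))  # fails the new exact pin -> False
print(satisfies("0.45.1", "== 0.45.1"))  # only 0.45.1 now satisfies -> True
```

An exact pin trades flexibility for reproducibility: any environment resolving this requirements file gets the one bitsandbytes build known to work with torch 2.6.0, at the cost of blocking future patch releases until the pin is revisited.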
