1 parent c72f9e8 commit 930c9ba
docs/source/quickstart.mdx
@@ -27,12 +27,12 @@ bitsandbytes provides three main features:
 Load and run a model using 8-bit quantization:
 
 ```py
-from transformers import AutoModelForCausalLM, AutoTokenizer
+from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
 
 model = AutoModelForCausalLM.from_pretrained(
     "meta-llama/Llama-2-7b-hf",
     device_map="auto",
-    load_in_8bit=True,
+    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
 )
 
 tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")