6 changes: 4 additions & 2 deletions docs/docs/advanced/anymodel.md
@@ -31,14 +31,16 @@ dataset = InstructionDataset(dataset_path)

To initialize the model, run the following commands:
```python
-from xturing.models import GenericModel
+from xturing.models import GenericLoraModel

-model_path = 'aleksickx/llama-7b-hf'
+model_path = "Qwen/Qwen2.5-0.5B"

model = GenericLoraModel(model_path)
```
The `model_path` can be a locally saved model or any model available on the Hugging Face [Model Hub](https://huggingface.co/models).

If you are following older notebooks that reference legacy `llama-7b-hf` mirrors, prefer current upstream checkpoints. Legacy mirrors can fail on newer `transformers` releases.
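To illustrate the lookup order described above, here is a minimal sketch (the helper `resolve_model_path` is hypothetical, not part of xTuring): a path that exists on disk is used directly, and anything else is treated as a Hub repository id.

```python
import os

def resolve_model_path(path):
    # Hypothetical helper mirroring the lookup order: an existing local
    # directory wins, otherwise the string is treated as a Hub repo id.
    return "local" if os.path.isdir(path) else "hub"

print(resolve_model_path("Qwen/Qwen2.5-0.5B"))  # "hub", unless a directory of that name exists
```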

To fine-tune the model on a dataset, we will use the default fine-tuning configuration.

```python
model.finetune(dataset=dataset)
```
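For context on what `InstructionDataset` consumes: alpaca-style data is a set of (instruction, optional input text, target) triples. The layout below is a minimal sketch of that format (key names are illustrative; check them against your xTuring version):

```python
# Alpaca-style instruction records, stored column-wise (key names illustrative).
records = {
    "instruction": ["Classify the sentiment of the sentence."],
    "text": ["I loved this movie!"],
    "target": ["positive"],
}

def build_prompt(i):
    # Join the instruction with its optional input text into one training prompt.
    instruction = records["instruction"][i]
    text = records["text"][i]
    return f"{instruction}\n{text}" if text else instruction

print(build_prompt(0))
```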
2 changes: 1 addition & 1 deletion examples/features/int4_finetuning/LLaMA_lora_int4.ipynb
@@ -63,7 +63,7 @@
"\n",
"instruction_dataset = InstructionDataset(\"../../models/llama/alpaca_data\")\n",
"# Initializes the model\n",
-"model = GenericLoraKbitModel('aleksickx/llama-7b-hf')"
+"model = GenericLoraKbitModel('Qwen/Qwen2.5-0.5B')"
]
},
{
2 changes: 1 addition & 1 deletion examples/features/int4_finetuning/README.md
@@ -91,7 +91,7 @@ You are encouraged to submit your performance results on other GPUs/configs/models

## 📚 Tutorial

-All instructions are inside the example [notebook](LLaMA_lora_int4.ipynb). **_Special Note:_** Using this demo requires you to have appropriate access to LLaMA weights. To apply access to it through this [link](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform).
+All instructions are inside the example [notebook](LLaMA_lora_int4.ipynb). **_Special Note:_** some older mirrored LLaMA checkpoints are no longer compatible with recent `transformers` versions. Use a currently maintained checkpoint path (for example `Qwen/Qwen2.5-0.5B`) in the notebook for reliable setup.

<br>
