DOC In-place modification through get_peft_model (#2313)
d-kleine authored Jan 9, 2025
1 parent 8d3039b commit af637ac
Showing 2 changed files with 4 additions and 1 deletion.
3 changes: 3 additions & 0 deletions docs/source/tutorial/peft_model_config.md
@@ -135,6 +135,9 @@ lora_model.print_trainable_parameters()
 "trainable params: 1,572,864 || all params: 332,769,280 || trainable%: 0.472659014678278"
 ```
 
+> [!WARNING]
+> When calling [`get_peft_model`], the base model is modified *in-place*. This means that calling [`get_peft_model`] on a model that was already modified in the same way will mutate it further. If you want to apply a new PEFT configuration after having called [`get_peft_model`], first unload the model with [`~LoraModel.unload`] and then call [`get_peft_model`] with your new configuration. Alternatively, you can re-initialize the model to ensure a fresh, unmodified state before applying a new PEFT configuration.
+
 Now you can train the [`PeftModel`] with your preferred training framework! After training, you can save your model locally with [`~PeftModel.save_pretrained`] or upload it to the Hub with the [`~transformers.PreTrainedModel.push_to_hub`] method.
2 changes: 1 addition & 1 deletion src/peft/mapping.py
@@ -154,7 +154,7 @@ def get_peft_model(
     low_cpu_mem_usage: bool = False,
 ) -> PeftModel | PeftMixedModel:
     """
-    Returns a Peft model object from a model and a config.
+    Returns a Peft model object from a model and a config, where the model will be modified in-place.
 
     Args:
         model ([`transformers.PreTrainedModel`]):
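For readers skimming the commit, here is a minimal sketch of the workflow the new warning describes. The checkpoint name and LoRA settings below are illustrative assumptions, not part of this commit:

```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Example checkpoint and settings, chosen for illustration only.
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# get_peft_model injects the LoRA layers into base_model itself, i.e. the
# base model is modified in-place rather than copied.
lora_model = get_peft_model(base_model, LoraConfig(r=8))

# To switch to a different PEFT configuration, first restore the original
# modules by unloading the adapter layers ...
base_model = lora_model.unload()

# ... and only then wrap the restored base model with the new config.
lora_model = get_peft_model(base_model, LoraConfig(r=16))
```

Skipping the unload step and calling [`get_peft_model`] again on the already-wrapped model would mutate it further, as the warning notes.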
