FIX: bug with assign=True in LoRA loading #2240
base: main
Conversation
When I load a Flux-trained LoRA through:

```python
import torch
from diffusers import AutoPipelineForText2Image, FluxPipeline
from safetensors.torch import load_file

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipe.load_lora_weights("model_qk_text.safetensors")
```

it raises this error:

```
pipe.load_lora_weights("model_qk_text.safetensors")
  File "/output/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1848, in load_lora_weights
    self.load_lora_into_transformer(
  File "/output/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1951, in load_lora_into_transformer
    incompatible_keys = set_peft_model_state_dict(transformer, state_dict, adapter_name, **peft_kwargs)
  File "/usr/local/lib/python3.8/site-packages/peft/utils/save_and_load.py", line 458, in set_peft_model_state_dict
    load_result = model.load_state_dict(peft_model_state_dict, strict=False, assign=True)
```

After removing `assign=True`, everything works.
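For context on what `assign=True` changes, here is a minimal sketch (plain PyTorch, not from this thread) of how `Module.load_state_dict` behaves with and without `assign=True` (available in PyTorch >= 2.1). With the default `assign=False`, checkpoint values are copied into the existing parameters and the module keeps its own dtype and device; with `assign=True`, the parameters are replaced by the state-dict tensors, so the checkpoint's properties win. A mismatch like a bfloat16 pipeline vs. an fp32 LoRA checkpoint is one way these two paths can diverge.

```python
# Minimal sketch, assuming PyTorch >= 2.1 (where load_state_dict gained `assign`).
import torch
import torch.nn as nn

model = nn.Linear(4, 4).to(torch.bfloat16)
checkpoint = {"weight": torch.randn(4, 4), "bias": torch.randn(4)}  # fp32 tensors

# Default: values are copied into the existing parameters, dtype stays bfloat16.
model.load_state_dict(checkpoint, strict=False)
print(model.weight.dtype)  # torch.bfloat16

# assign=True: the parameters are replaced by the state-dict tensors,
# so the module now carries the checkpoint's fp32 dtype.
model.load_state_dict(checkpoint, strict=False, assign=True)
print(model.weight.dtype)  # torch.float32
```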
Thanks for reporting this error. We cannot change the argument just like that, as this would lead to failures when loading other models. Instead, let's try to debug why Flux fails in this case. As a first step, could you please check if loading while passing …
Before we do that, we first need to understand why this adapter causes the issue while others work. Then we can think of the best solution. I'll take a look at it when I have a bit of time on my hands.
I have a bit of time to investigate the issue this week. Do you know of a publicly available Flux LoRA adapter that causes the issue you described (safetensors only)? That way, I can try to reproduce the error.
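If it helps with the reproduction request above, a quick way to share what a failing adapter looks like is to dump its keys and dtypes. This is a minimal sketch, not part of the thread; the file name is taken from the report above and may differ on your side.

```python
# Minimal sketch: list the tensors in a LoRA safetensors checkpoint so a
# failing adapter can be compared against one that loads fine.
from safetensors.torch import load_file

state_dict = load_file("model_qk_text.safetensors")  # file name from the report
for key, tensor in state_dict.items():
    print(key, tuple(tensor.shape), tensor.dtype)
```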
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread.