FIX: bugs of assign=True in load lora #2240

Open
wants to merge 1 commit into base: main

Conversation

tomguluson92

When I load a Flux-trained LoRA via:
```
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("model_qk_text.safetensors")
```

It raised this error:
```
    pipe.load_lora_weights("model_qk_text.safetensors")
  File "/output/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1848, in load_lora_weights
    self.load_lora_into_transformer(
  File "/output/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1951, in load_lora_into_transformer
    incompatible_keys = set_peft_model_state_dict(transformer, state_dict, adapter_name, **peft_kwargs)
  File "/usr/local/lib/python3.8/site-packages/peft/utils/save_and_load.py", line 458, in set_peft_model_state_dict
    load_result = model.load_state_dict(peft_model_state_dict, strict=False, assign=True)
```
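
For background, `assign=True` changes how `torch.nn.Module.load_state_dict` handles the incoming tensors: instead of copying values into the module's existing parameters, it replaces the parameters with the state-dict tensors, preserving their dtype and device. A minimal sketch with a toy module (illustrative only, not the actual diffusers/peft code):

```
import torch
import torch.nn as nn

# Toy stand-in for a LoRA layer; not the actual peft/diffusers module.
layer = nn.Linear(4, 4)  # parameters are float32 by default
state = {k: v.to(torch.bfloat16) for k, v in layer.state_dict().items()}

# Default (assign=False): values are copied into the existing parameters,
# so the module keeps its original dtype and device.
layer.load_state_dict(state)
print(layer.weight.dtype)  # torch.float32

# assign=True: the state-dict tensors replace the parameters outright, so
# the module adopts the dtype/device of the loaded tensors. PEFT relies on
# this for its low_cpu_mem_usage (meta-device) loading path.
layer.load_state_dict(state, assign=True)
print(layer.weight.dtype)  # torch.bfloat16
```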

After removing `assign=True`, everything works.
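
Concretely, the change this PR proposes appears to be the one-line edit below in `peft/utils/save_and_load.py`, at the call shown in the traceback (a sketch inferred from the description above, not the exact diff):

```
# Before: state-dict tensors replace the module parameters outright.
load_result = model.load_state_dict(peft_model_state_dict, strict=False, assign=True)

# After (this PR): values are copied into the existing parameters instead.
load_result = model.load_state_dict(peft_model_state_dict, strict=False)
```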
@tomguluson92 tomguluson92 changed the title fixs: bugs of assign=True Fix: bugs of assign=True Nov 28, 2024
@tomguluson92 tomguluson92 changed the title Fix: bugs of assign=True FIX: bugs of assign=True in load lora Nov 28, 2024
@BenjaminBossan
Member

BenjaminBossan commented Nov 28, 2024

Thanks for reporting this error. We cannot simply change the argument, as that would break loading for other models. Instead, let's try to debug why Flux fails in this case. As a first step, could you please check whether passing low_cpu_mem_usage=False to load_lora_weights resolves your error?
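
For reference, a minimal sketch of that check, reusing the pipeline and file path from the report above:

```
# Suggested diagnostic: disable the low-CPU-memory (assign-based) loading
# path and fall back to in-place copies of the LoRA weights.
pipe.load_lora_weights("model_qk_text.safetensors", low_cpu_mem_usage=False)
```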

@tomguluson92
Author

tomguluson92 commented Nov 28, 2024

low_cpu_mem_usage=False works, so what's your opinion on this problem? Should we add a special flag in PEFT for Flux compatibility?

@BenjaminBossan
Member

> Should we add a special flag in PEFT for Flux compatibility?

Before we do that, we need to first understand why this adapter causes the issue, while others work. Then we can think of the best solution. I'll take a look at it when I have a bit of time on my hands.

@BenjaminBossan
Copy link
Member

I have a bit of time to investigate the issue this week. Do you know of a publicly available Flux LoRA adapter (safetensors only) that causes the issue you described? That way, I can try to reproduce the error.


This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
