Hi! I haven't tested enable_sequential_cpu_offload on stable-diffusion-vsd-guidance yet. I'll label this issue as a bug, but it's low priority. Which GPU model are you using to run this? Also note that under the multi-GPU setting, data.batch_size is the batch size for each GPU, so if you want to reduce VRAM usage, please use data.batch_size=1.
I was curious about this too. I currently only have an NVIDIA GeForce RTX 3060 with 12 GiB of VRAM, plus an AMD Ryzen 7 5700G with Radeon Graphics × 16 and 31.1 GiB of system memory available. How can I utilize my CPU effectively to run larger models such as dreamfusion-if.yaml, which I currently can't run? Any suggestion would be helpful.
python launch.py --config configs/prolificdreamer.yaml --train --gpu 5,6 system.prompt_processor.prompt="a DSLR photo of a blue car" data.batch_size=2 data.width=128 data.height=128 system.guidance.enable_sequential_cpu_offload=true
The error:
NotImplementedError: Cannot copy out of meta tensor; no data!
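For context on what this error means: sequential CPU offload keeps model weights as "meta" tensors (shape and dtype metadata only, no actual storage) until they are needed, and any code path that then tries to copy such a tensor to a real device hits exactly this NotImplementedError. This is a minimal sketch of how the error arises in plain PyTorch, independent of threestudio or this config:

```python
import torch

# A meta tensor has shape/dtype metadata but no underlying data.
t = torch.empty(3, device="meta")
print(t.shape)  # metadata is still accessible

# Copying it to a real device cannot work, since there is no data to copy.
try:
    t.to("cpu")
except NotImplementedError as e:
    print(f"NotImplementedError: {e}")
```

So the traceback suggests that, with enable_sequential_cpu_offload=true, some part of the pipeline is trying to move offloaded (meta) weights to a device instead of letting the offload hooks materialize them.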