
Memory check before inference to avoid VAE Decode using exceeded VRAM. #5745

Open
wants to merge 1 commit into master from pr-20241123_VAE-Decode-improvements

Conversation


@wl2018 wl2018 commented Nov 23, 2024

Check if free memory is not less than the expected usage before doing the actual decoding; if the check fails, try to free the required amount of memory, and if that still fails, switch directly to tiled VAE decoding.

It seems PyTorch may continue occupying memory until the model is destroyed after OOM occurs. This commit tries to avoid OOM from happening in the first place for VAE Decode.

This is for VAE Decode run with exceeded VRAM from #5737.
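For illustration, a minimal sketch of the proposed flow, assuming ComfyUI-style helpers from comfy/sd.py and comfy/model_management (memory_used_decode, load_models_gpu, get_free_memory, free_memory, decode_tiled_); the exact names and call sites in the actual diff may differ:

memory_used = self.memory_used_decode(samples_in.shape, self.vae_dtype)  # estimated bytes for one batch
model_management.load_models_gpu([self.patcher], memory_required=memory_used)
free_memory = model_management.get_free_memory(self.device)
if free_memory < memory_used:
    # Not enough free VRAM is predicted: ask the model manager to free more.
    model_management.free_memory(memory_used, self.device, [self.patcher])
    free_memory = model_management.get_free_memory(self.device)
if free_memory < memory_used:
    logging.warning("Out of memory is predicted for regular VAE decoding, switching directly to tiled VAE decoding.")
    pixel_samples = self.decode_tiled_(samples_in)
else:
    pixel_samples = self.first_stage_model.decode(samples_in.to(self.vae_dtype).to(self.device))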

@wl2018 wl2018 force-pushed the pr-20241123_VAE-Decode-improvements branch 2 times, most recently from e85d80f to 58eb317 on November 23, 2024 at 15:17
@wl2018 wl2018 changed the title Memory check before inference to avoid VAE Decode using exceeded RAM. Memory check before inference to avoid VAE Decode using exceeded VRAM. Nov 23, 2024
comfy/sd.py Outdated
logging.debug(f"Free memory: {free_memory} bytes, predicted memory useage of one batch: {memory_used} bytes")
if free_memory < memory_used:
logging.debug("Possible out of memory is detected, try to free memory.")
model_management.free_memory(memory_used, self.device, [self.patcher])
Owner

The load_models_gpu function already calls free_memory.

Author

Got it. So there's no need for two checks and an additional free_memory call.
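For context, a rough sketch of the point above (assumed from the surrounding comfy/sd.py code, not part of this PR's diff): passing the estimate to load_models_gpu already makes the model manager free memory before decoding, so a separate free_memory call here would be redundant:

memory_used = self.memory_used_decode(samples_in.shape, self.vae_dtype)
# load_models_gpu is assumed to call free_memory internally so that
# memory_required bytes can fit; no explicit free_memory call is needed here.
model_management.load_models_gpu([self.patcher], memory_required=memory_used)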

comfy/sd.py Outdated
logging.debug(f"Free memory: {free_memory} bytes")
if free_memory < memory_used:
logging.warning("Warning: Out of memory is predicted for regular VAE decoding, directly switch to tiled VAE decoding.")
predicted_oom = True
Owner

The reason for actually trying is that the memory estimate might not be accurate and can overestimate the amount of memory needed, so it is better to attempt the regular decoding first.

The proper way to solve the issue would be to free the memory properly on OOM before doing tiled decode.
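A hedged sketch of that alternative, i.e. catching the OOM, releasing the leftover allocations, and only then retrying with tiled decode; torch.cuda.OutOfMemoryError is PyTorch's OOM exception, while soft_empty_cache and decode_tiled_ are assumed helpers from comfy/model_management and comfy/sd.py:

try:
    pixel_samples = self.first_stage_model.decode(samples_in.to(self.vae_dtype).to(self.device))
except torch.cuda.OutOfMemoryError:
    logging.warning("Ran out of memory during regular VAE decoding, retrying with tiled VAE decoding.")
    # Release the allocations left behind by the failed attempt before retrying.
    model_management.soft_empty_cache()
    pixel_samples = self.decode_tiled_(samples_in)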

Author
@wl2018 wl2018 Nov 24, 2024

OK, so we first need to know how to properly free the VRAM after an OOM...
I don't think it's suitable to just destroy the entire model object and then reload it.

Actually, I sometimes found that OOM didn't occur at all; decoding just kept running, consumed a lot of shared GPU memory, and became very slow. This happened randomly.

Another point: what's the drawback of using tiled decode? Tiled decode doesn't seem slow, at least on my computer, so why do we care so much about overestimation?

Owner

Tiled decode gives lower quality images/videos.

Check if free memory is not less than the expected usage before doing the actual decoding,
and if the check fails, switch to tiled VAE decoding directly.

It seems PyTorch may continue occupying memory until the model is destroyed
after OOM occurs. This commit tries to avoid OOM from happening in the first
place for VAE Decode.

This is for VAE Decode run with exceeded VRAM from comfyanonymous#5737.
@wl2018 wl2018 force-pushed the pr-20241123_VAE-Decode-improvements branch from 58eb317 to a3b9b3c on November 24, 2024 at 10:57