Too Slow training #31
Hi LaFeuilleMorte, Indeed, that training time seems very long: 50 iterations should be very fast at the beginning of training (about 0.06 minutes) and take at most 0.2 minutes once the surface regularization starts. I have several questions for you:
On my budget machine (a 2016 personal GPU workstation), the training speed is acceptable: 15,000 iterations take a few dozen minutes.
To the best of my understanding of the code, most of the time is spent in the function "coarse_training_with_density_regularization".
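One way to check this claim is to time the suspected function directly. A minimal sketch (standard library only; `torch.cuda.synchronize` matters because CUDA kernel launches are asynchronous, but the import is optional so this also runs on CPU-only machines — the `sum` call at the end is just a stand-in workload, not a function from this repository):

```python
import time

try:
    import torch  # optional; only used to flush queued CUDA kernels

    def _sync():
        if torch.cuda.is_available():
            torch.cuda.synchronize()
except ImportError:
    def _sync():
        pass

def timed(fn, *args, **kwargs):
    """Run fn once and return (result, seconds), synchronizing the GPU
    before and after so asynchronous launches aren't mistaken for speed."""
    _sync()
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    _sync()
    return result, time.perf_counter() - start

# Stand-in workload; replace with the actual call you suspect is slow:
_, secs = timed(sum, range(1_000_000))
print(f"elapsed: {secs:.4f}s")
```

Wrapping "coarse_training_with_density_regularization" this way would show whether it really dominates the 5–6 minutes per 50 iterations.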
I also have this very same issue. However, I'm only using a GeForce RTX 2060 with 14 GB of VRAM, so the problem might be on my end rather than in the repository.
Looking into this issue a little more, I want to ask: @LaFeuilleMorte, what is your GPU utilization versus GPU memory usage? When I run my model, almost all of the memory is used, but the GPU itself is doing almost no work. My theory is that the CPU isn't getting data to the GPU fast enough, so the CPU is the bottleneck. Looking at the code, the model is trained on a single image at a time (i.e., the batch size is 1), which may be why the GPU has nothing to do. I tried changing the relevant parameter to a larger number of images, but it seems that at some point during development this value was fixed at 1, as I get the following error if I try to change it.
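To compare GPU utilization against memory usage as asked above, `nvidia-smi` can be queried programmatically. A sketch that degrades gracefully on machines without an NVIDIA driver (the query field names are real `nvidia-smi` options; the function name is mine):

```python
import shutil
import subprocess

def gpu_stats():
    """Return a list of (utilization %, memory used MiB, memory total MiB)
    per GPU via nvidia-smi, or None if no NVIDIA driver is installed."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return [tuple(int(v) for v in line.split(", "))
            for line in out.splitlines()]

print(gpu_stats())  # e.g. [(3, 23800, 24576)]: full memory, idle GPU
```

High memory usage with near-zero utilization, as described above, is the classic signature of a CPU-bound data pipeline.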
It looks like the GS (Gaussian Splatting) code does not support batching.
Hi, thanks for your great work and the open-source code. I'm seeing very slow training on my RTX 3090 machine: it takes 5–6 minutes to do 50 iterations (8,000 in total), so the whole training would take over 10 hours. That's far longer than reported in the paper. Am I doing something wrong?