GPU consumption when training on ImageNet #12
Comments
Just realized it's mainly the text encoder that causes the problem :( Hmm, it's a tough question then. Ideally you would need a GPU with 32GB memory. For 16GB GPUs you might need to modify the source code somehow (no idea what to do yet).
Yes, it exceeds 16GB even using …
@KaiyangZhou May I ask about the ResNet-50 ImageNet experiments in the paper: how many GPUs are needed, how much memory is required, and how long does training usually take?
@KaiyangZhou
The pre-trained weights have just been released. Please see the readme file.
Thank you for sharing the code, …
Basically, the memory consumption increases with the number of classes (for prompt learning methods). 24GB might be sufficient for most datasets except ImageNet.
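As a rough illustration of how the text-encoder memory scales with the number of classes, here is a back-of-the-envelope sketch. The width, depth and context-length values are assumptions based on CLIP's default text transformer; the real footprint is several times larger because each layer also stores attention maps, MLP hidden states, optimizer state and the image branch.

```python
# Rough estimate of text-encoder activation memory in prompt learning.
n_cls   = 1000   # ImageNet classes -> the text encoder's effective batch size
ctx_len = 77     # CLIP context length (assumed)
width   = 512    # text transformer width (assumed)
layers  = 12     # text transformer depth (assumed)
bytes_per_float = 4

# Activations kept for the backward pass scale linearly with n_cls:
per_layer = n_cls * ctx_len * width * bytes_per_float
total = per_layer * layers
print(f"~{total / 1024**3:.1f} GB for a single saved tensor per layer "
      f"(real usage is several times this)")
```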
Hi Kaiyang, I think with a slight modification the model could run on ImageNet when using more than one graphics card. For CoOp, just change the code on line 257 of coop.py from "self.model = nn.DataParallel(self.model)" to "self.model.text_encoder = nn.DataParallel(self.model.text_encoder)". I tested it on 1-shot ImageNet with 4 graphics cards and found that each card only consumed around 8GB of memory. Compared with my previous experiments on a single card, the accuracy even increases a little bit :) The image_encoder does not need to run in parallel, at least for CoOp, because it does not require backward propagation and only has a small batch size.
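For reference, a minimal sketch of the change described above, written as a standalone helper rather than the in-place edit to CoOp.build_model(); the text_encoder attribute name is taken from the comment above and may differ slightly across versions of the repo.

```python
import torch
import torch.nn as nn

def parallelize_text_encoder(model: nn.Module) -> nn.Module:
    """Wrap only the text encoder of a CustomCLIP-style model in DataParallel.

    Equivalent to replacing
        self.model = nn.DataParallel(self.model)
    with
        self.model.text_encoder = nn.DataParallel(self.model.text_encoder)
    in CoOp.build_model(): the text encoder's activations and gradients for the
    1,000 class prompts get scattered across GPUs, while the image encoder and
    the learnable context vectors stay on the default card.
    """
    if torch.cuda.device_count() > 1:
        model.text_encoder = nn.DataParallel(model.text_encoder)
    return model
```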
I have to say that using DataParallel on the whole CustomCLIP cannot save GPU memory on each card at all...
No, DataParallel won't help. The problem with ImageNet is that the 1,000 classes create a huge memory consumption for the text encoder, so for smaller datasets with fewer classes the problem is gone. And the tricky thing is, you can't split the classes across two GPUs, because in doing so the attention in the transformer model won't work properly. Hmm ...
In fact, DataParallel can save memory if used correctly. In short, it is not the class tensors in the last layer themselves that cause the problem, but the intermediate tensors, the gradients and the computation graph. If the code is modified as mentioned above, all the intermediate tensors and most of the gradients are scattered to different GPUs, and once the model on each card has computed its final text features, we can collect them onto a single card (see the sketch below).
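A toy sketch of why this works; ToyTextEncoder is a hypothetical stand-in for the repo's TextEncoder, not the actual class. Attention acts only within each prompt sequence, so DataParallel can split the class dimension across cards and gather the per-class features back onto one card.

```python
import torch
import torch.nn as nn

class ToyTextEncoder(nn.Module):
    """Stand-in for CLIP's text transformer: self-attention is applied within
    each prompt sequence only, so prompts for different classes are independent."""
    def __init__(self, width=512, layers=2, heads=8):
        super().__init__()
        block = nn.TransformerEncoderLayer(d_model=width, nhead=heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, num_layers=layers)

    def forward(self, prompts):             # (n_cls, ctx_len, width)
        x = self.blocks(prompts)
        return x[:, -1, :]                  # one feature per class (CLIP reads the EOT position)

n_cls, ctx_len, width = 1000, 77, 512
prompts = nn.Parameter(torch.randn(n_cls, ctx_len, width, device="cuda"))

encoder = nn.DataParallel(ToyTextEncoder(width).cuda())
# DataParallel scatters the 1,000 prompts along dim 0 across the available GPUs,
# builds the forward graph (and later runs the backward) on each card, and
# gathers the per-class text features back onto the default card.
text_features = encoder(prompts)            # (n_cls, width), on cuda:0
print(text_features.shape)
```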
Emm, I am not sure whether you noticed, but I have indeed run experiments and verified the method's effectiveness... I tested it on 1-shot ImageNet with 4 graphics cards and found that each card only consumed around 8GB of memory.
I do not understand why the attention in the transformer model would have problems. I think the attention mechanism only works within each sample; there is no attention between samples in a batch.
Oh, my bad. I thought you were talking about another approach. The attention thing doesn't matter then.
I want to know why CoOp doesn't have this problem?
> I want to know why CoOp doesn't have this problem?

One RTX 3090 with 24GB is enough for CoOp.
When training on the 1,000-class ImageNet, the GPU memory used by the prompts seems very large and results in an Out-Of-Memory error on a 16GB GPU card.
How can this problem be solved?