Hi,

Thanks for the help with the previous question. I have prepared the ImageNet data and it is ready for training. I just wonder what the stopping criterion for pretraining is. I did not find any pretraining curve in the paper, and the only pretraining metric is the center-weighted MSE loss, which does not give a direct hint of how well the model will eventually count.

So when you were doing pretraining, were you simply checking whether the validation loss had stopped decreasing for some number of epochs?
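For reference, the "stop when validation loss plateaus" criterion is usually expressed in Keras as an `EarlyStopping` callback. Here is a minimal sketch, assuming a compiled model `model` and generators `train_gen`/`val_gen`; these names, the step counts, and the patience value are my placeholders, not settings from the repo:

```python
from keras.callbacks import EarlyStopping, ModelCheckpoint

# `model`, `train_gen`, `val_gen` are placeholders for the GMN model and
# data generators; patience=5 is an illustrative guess, not your setting.
callbacks = [
    # Stop once val_loss has not improved for `patience` consecutive epochs.
    EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True),
    # Keep the best checkpoint so the stopped run is still usable.
    ModelCheckpoint('pretrain_best.h5', monitor='val_loss', save_best_only=True),
]

model.fit_generator(train_gen,
                    steps_per_epoch=600,      # placeholder values
                    epochs=100,               # large cap; early stopping ends it
                    validation_data=val_gen,
                    validation_steps=50,
                    callbacks=callbacks)
```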
I just started pretraining. With BS = 64 and EPOCHS = 25, training finishes pretty quickly, as there are only 30 classes, which does not feel right to me. Do you remember roughly the configuration you used for pretraining? From https://github.com/erikalu/class-agnostic-counting/blob/master/src/main.py I can see it is steps_per_epoch=600 and epochs=36. Since each class contains multiple videos, it is reasonable to sample from each class multiple times.
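As a sanity check on that configuration: the number of classes does not bound an epoch when steps_per_epoch is fixed, since the effective epoch size is steps_per_epoch × batch_size (the batch size below is mine; the official value may differ):

```python
# Back-of-envelope for the official schedule with my batch size.
steps_per_epoch = 600    # from the official main.py
batch_size = 64          # my BS; the official value may differ
epochs = 36              # from the official main.py

pairs_per_epoch = steps_per_epoch * batch_size   # 38,400 sampled pairs
total_pairs = pairs_per_epoch * epochs           # 1,382,400 over the run
print(pairs_per_epoch, total_pairs)
```

So even with only 30 classes, each "epoch" resamples tens of thousands of patch pairs.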
By the way, I was trying to convert the Keras model to PyTorch, but it was not successful: https://github.com/weicheng113/class-agnostic-counting-pytorch/blob/master/keras2pytorch/keras2pytorch.py. I tested with the demo.py from the official repo, and the result was completely different, even though I checked the weights and they were the same. I had difficulty inspecting the input and output of nested layers in Keras, so I was not able to figure out the cause.
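In case it helps narrow down where the conversion diverges, the approach I know of is to build a Keras sub-model up to the layer of interest, register a forward hook on the corresponding PyTorch module, and compare the two activations layer by layer. A rough sketch; the layer and module names are hypothetical, and for a nested Keras model you may need to chain get_layer calls (or pass a list of inputs for a two-input model):

```python
import numpy as np
import torch
from keras.models import Model

# Keras side: expose an intermediate output via a sub-model.
# 'conv1' is a hypothetical layer name; for a nested model use e.g.
# keras_model.get_layer('submodel').get_layer('conv1').
probe = Model(inputs=keras_model.input,
              outputs=keras_model.get_layer('conv1').output)
keras_out = probe.predict(x)            # x: NHWC numpy array

# PyTorch side: capture the matching module's output with a hook.
feats = {}
def hook(module, inp, out):
    feats['out'] = out.detach().cpu().numpy()

torch_model.conv1.register_forward_hook(hook)   # hypothetical module path
with torch.no_grad():
    torch_model(torch.from_numpy(x.transpose(0, 3, 1, 2)).float())  # NHWC -> NCHW

# Largest elementwise difference; repeat per layer to find the first divergence.
print(np.abs(keras_out.transpose(0, 3, 1, 2) - feats['out']).max())
```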
Thanks,
Cheng