PatchCAM in CUB and ImageNet. #6
Does the patchCAM refer to the initial attention map from the Transformer, e.g. Eq. (1) in our main paper?
Thank you for your patient answers. I learned a lot.
PatchCAM refers to the CAM generated from the patch tokens. The activation of patchCAM on the CUB dataset seems strange to me: it is high over almost the entire image (nearly the whole CAM is red), and in particular the activation on the foreground object is weaker than on the background. So patchCAM does not appear to provide class-specific semantic information on CUB. On ImageNet, however, patchCAM behaves normally again, with strong foreground-object activation and weak background activation, so it does provide class-specific semantic information there. I wonder why this interesting phenomenon occurs.
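For concreteness, here is a minimal sketch of what "CAM generated from the patch tokens" could mean: project each patch embedding onto the classifier weight of the target class and reshape the per-patch scores into the spatial grid. The function name, the min-max normalization, and the use of NumPy with random data are my assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def patch_cam(patch_tokens, class_weights, class_idx, grid_size):
    """Sketch of a class activation map from Transformer patch tokens.

    patch_tokens:  (N, D) patch embeddings, with N == grid_size ** 2
    class_weights: (C, D) linear-classifier weight matrix
    class_idx:     index of the target class
    grid_size:     side length of the square patch grid
    """
    # Per-patch class evidence: dot product with the class weight vector.
    scores = patch_tokens @ class_weights[class_idx]          # shape (N,)
    cam = scores.reshape(grid_size, grid_size)
    # Min-max normalize to [0, 1] for visualization (an assumed choice).
    cam = cam - cam.min()
    peak = cam.max()
    return cam / peak if peak > 0 else cam

# Toy usage with random data (14x14 grid, 64-dim embeddings, 10 classes).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(196, 64))
weights = rng.normal(size=(10, 64))
cam = patch_cam(tokens, weights, 3, 14)
print(cam.shape, float(cam.min()), float(cam.max()))
```

Under this formulation, an image where "almost the whole CAM is red" would mean nearly all patch tokens correlate strongly with the class weight vector before normalization, which matches the CUB observation described above.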