
CUDA out of memory in semantic loss training #326

Open
DariusNafar opened this issue Feb 17, 2022 · 1 comment
Comments

@DariusNafar (Collaborator)

@hfaghihi15 check this log out:

a116599

The semantic loss causes a CUDA out-of-memory error after a small number of iterations (<10), even with very small batches and even on Avicenna. To reproduce the error, run Chen's code from his branch:

https://github.com/HLR/DomiKnowS/tree/chen_zheng_procedural_text

with this command:
python WIQA_aug.py --cuda 0 --epoch 10 --lr 2e-7 --samplenum 1000000000 --batch 2 --beta 1.0 --semantic_loss True

@hfaghihi15 (Collaborator)

Hi @AdmiralDarius, is this problem resolved? Could you elaborate on what the issue was and how it was handled?
