DINOv2 training error #492

Open

wyh196646 opened this issue Jan 4, 2025 · 1 comment

wyh196646 commented Jan 4, 2025


I20250104 12:12:42 90 dinov2 helpers.py:102] Training [ 1250/2500000] eta: 40 days, 13:22:08 lr: 0.0006 (0.0003) wd: 0.0400 (0.0400) mom: 0.9940 (0.9940) last_layer_lr: 0.0000 (0.0000) current_batch_size: 380.0000 (380.0000) total_loss: 6.7223 (inf) dino_local_crops_loss: 4.9698 (5.1138) dino_global_crops_loss: 0.6292 (0.6445) koleo_loss: 0.0315 (inf) ibot_loss: 1.0857 (1.1301) time: 1.363414 data: 0.000342 max mem: 61695
I20250104 12:12:55 90 dinov2 helpers.py:102] Training [ 1260/2500000] eta: 40 days, 13:03:52 lr: 0.0006 (0.0003) wd: 0.0400 (0.0400) mom: 0.9940 (0.9940) last_layer_lr: 0.0006 (0.0000) current_batch_size: 380.0000 (380.0000) total_loss: 6.7171 (inf) dino_local_crops_loss: 4.9707 (5.1126) dino_global_crops_loss: 0.6294 (0.6444) koleo_loss: 0.0315 (inf) ibot_loss: 1.0856 (1.1297) time: 1.356799 data: 0.000374 max mem: 61695
I20250104 12:13:10 90 dinov2 helpers.py:102] Training [ 1270/2500000] eta: 40 days, 13:28:36 lr: 0.0006 (0.0003) wd: 0.0400 (0.0400) mom: 0.9940 (0.9940) last_layer_lr: 0.0006 (0.0000) current_batch_size: 380.0000 (380.0000) total_loss: 6.7171 (inf) dino_local_crops_loss: 4.9704 (5.1115) dino_global_crops_loss: 0.6289 (0.6443) koleo_loss: 0.0315 (inf) ibot_loss: 1.0836 (1.1294) time: 1.412941 data: 0.000429 max mem: 61695
I20250104 12:13:24 90 dinov2 helpers.py:102] Training [ 1280/2500000] eta: 40 days, 13:25:48 lr: 0.0006 (0.0003) wd: 0.0400 (0.0400) mom: 0.9940 (0.9940) last_layer_lr: 0.0006 (0.0000) current_batch_size: 380.0000 (380.0000) total_loss: 6.7192 (inf) dino_local_crops_loss: 4.9700 (5.1104) dino_global_crops_loss: 0.6287 (0.6442) koleo_loss: 0.0315 (inf) ibot_loss: 1.0826 (1.1290) time: 1.436386 data: 0.000424 max mem: 61695
I20250104 12:13:26 90 dinov2 train.py:278] NaN detected
Traceback (most recent call last):
  File "/ruiyan/yuhao/project/FMBC/dinov2/dinov2/train/train.py", line 322, in <module>
    main(args)
  File "/ruiyan/yuhao/project/FMBC/dinov2/dinov2/train/train.py", line 317, in main
    do_train(cfg, model, resume=not args.no_resume)
  File "/ruiyan/yuhao/project/FMBC/dinov2/dinov2/train/train.py", line 279, in do_train
    raise AssertionError
AssertionError
(the same traceback is printed, interleaved, by each of the other worker ranks)
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 95 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 90) of binary: /opt/conda/envs/dinov2/bin/python
Traceback (most recent call last):
  File "/opt/conda/envs/dinov2/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/dinov2/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/conda/envs/dinov2/lib/python3.9/site-packages/torch/distributed/launch.py", line 196, in <module>
    main()
  File "/opt/conda/envs/dinov2/lib/python3.9/site-packages/torch/distributed/launch.py", line 192, in main
    launch(args)
  File "/opt/conda/envs/dinov2/lib/python3.9/site-packages/torch/distributed/launch.py", line 177, in launch
    run(args)
  File "/opt/conda/envs/dinov2/lib/python3.9/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/opt/conda/envs/dinov2/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/opt/conda/envs/dinov2/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

dinov2/train/train.py FAILED

Failures:
[1]:
time : 2025-01-04_12:13:43
host : other-6c52476d-wh8pm
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 91)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
time : 2025-01-04_12:13:43
host : other-6c52476d-wh8pm
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 92)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
time : 2025-01-04_12:13:43
host : other-6c52476d-wh8pm
rank : 3 (local_rank: 3)
exitcode : 1 (pid: 93)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[4]:
time : 2025-01-04_12:13:43
host : other-6c52476d-wh8pm
rank : 4 (local_rank: 4)
exitcode : 1 (pid: 94)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
time : 2025-01-04_12:13:43
host : other-6c52476d-wh8pm
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 90)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

How can I solve this error? I am training DINOv2 on my own dataset. 🙂
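
For context, the first bad value shows up in the running averages well before the crash: total_loss and koleo_loss are already reported as inf in the log lines above. The crash itself comes from a finiteness check in do_train that logs "NaN detected" and raises an AssertionError once the reduced losses stop being finite. Roughly like this (a paraphrased sketch around dinov2/train/train.py:279, not the exact upstream code; the names here are illustrative):

import math
from typing import Dict

def check_losses_finite(loss_dict: Dict[str, float]) -> None:
    """Abort training once any loss has become NaN/inf.

    Paraphrased sketch of the guard around dinov2/train/train.py:279.
    The real code first reduces the per-rank losses, but the idea is the
    same: if the summed losses are no longer finite, log "NaN detected"
    and raise AssertionError (the error every rank printed above).
    """
    total = sum(loss_dict.values())
    if math.isnan(total) or math.isinf(total):
        print("NaN detected")
        raise AssertionError

# The averaged koleo_loss above is already inf, so a check like this
# trips (and raises) as soon as the running loss values blow up:
check_losses_finite({"dino_local_crops_loss": 4.97, "koleo_loss": float("inf")})

So I am assuming this is loss divergence early in training rather than a launcher problem, and that settings such as the base learning rate, warmup length, or gradient clipping in the optim section of the config are the first things to look at, but I have not confirmed a fix yet.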

mc053 commented Jan 21, 2025

I encountered the same issue.
