vgg16-cifar10 - on Macbook Pro 16 M2 Pro #31

Open
ITWOI opened this issue Feb 11, 2023 · 0 comments
ITWOI commented Feb 11, 2023

Hi,
I just ran this on my MacBook Pro 16 M2 Pro, and here are the results.

torch 1.13.1
device mps
Using downloaded and verified file: data/cifar-10-python.tar.gz
Extracting data/cifar-10-python.tar.gz to data
Downloading: "https://github.com/pytorch/vision/zipball/v0.11.0" to /Users/wangyu/.cache/torch/hub/v0.11.0.zip
/Users/wangyu/Documents/python3ve/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/Users/wangyu/Documents/python3ve/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=None`.
  warnings.warn(msg)
Epoch: 001/001 | Batch 0000/1406 | Loss: 2.4328
Epoch: 001/001 | Batch 0100/1406 | Loss: 2.2731
Epoch: 001/001 | Batch 0200/1406 | Loss: 1.9646
Epoch: 001/001 | Batch 0300/1406 | Loss: 1.8460
Epoch: 001/001 | Batch 0400/1406 | Loss: 1.9599
Epoch: 001/001 | Batch 0500/1406 | Loss: 1.9507
Epoch: 001/001 | Batch 0600/1406 | Loss: 2.0058
Epoch: 001/001 | Batch 0700/1406 | Loss: 1.7141
Epoch: 001/001 | Batch 0800/1406 | Loss: 1.9580
Epoch: 001/001 | Batch 0900/1406 | Loss: 1.5622
Epoch: 001/001 | Batch 1000/1406 | Loss: 1.6903
Epoch: 001/001 | Batch 1100/1406 | Loss: 1.8572
Epoch: 001/001 | Batch 1200/1406 | Loss: 1.5994
Epoch: 001/001 | Batch 1300/1406 | Loss: 1.9531
Epoch: 001/001 | Batch 1400/1406 | Loss: 1.5582
Time / epoch without evaluation: 19.83 min
Epoch: 001/001 | Train: 40.06% | Validation: 41.60% | Best Validation (Ep. 001): 41.60%
Time elapsed: 25.22 min
Total Training Time: 25.22 min
Test accuracy 40.97%
Total Time: 28.94 min
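
For reference, the two UserWarnings above come from passing the deprecated `pretrained` argument to torchvision. A minimal sketch of the modern equivalent, assuming a stock torchvision VGG16 and torch >= 1.12 (the exact model construction in this repo may differ):

```python
import torch
from torchvision import models

# Pick the Apple-silicon "mps" backend when available, otherwise fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Since torchvision 0.13 'pretrained' is deprecated; pass a weights enum instead.
# weights=None matches the old behavior the warning mentions (random init);
# use weights=models.VGG16_Weights.DEFAULT for the pretrained ImageNet weights.
model = models.vgg16(weights=None).to(device)
```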