Hi, I ran the `vgg16-cifar10.py` benchmark on `torch` version `1.13.1` on my M1 Max MBP with 24 GPU cores and am getting better results than the M1 Max MBP (32 GPU cores) in your blog post (17.88 vs. 31.54 minutes). I also ran it on `1.13.0` and got similar results, so perhaps the stable `1.13.0` release incorporated some optimizations or fixed some issues. Unfortunately, I couldn't find the nightly build you used in the blog post.
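For anyone reproducing this, the snippet below is a minimal sketch of how I'd confirm the torch build and MPS availability before timing a run; it only assumes the standard `torch.backends.mps` API (available since torch 1.12) and is not taken from the benchmark script itself.

```python
# Quick environment check before benchmarking (sketch, not part of vgg16-cifar10.py)
import torch

print(torch.__version__)                    # e.g. 1.13.1
print(torch.backends.mps.is_built())        # True if this build ships the MPS backend
print(torch.backends.mps.is_available())    # True on Apple Silicon with macOS 12.3+

# Fall back to CPU if MPS isn't available
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(f"Using device: {device}")
```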
Just wanted to flag this in case folks want to compare results for the new Apple Silicon chips (M2 Pro/Max) -- the results may not be directly comparable.