Serialization is now much faster because we no longer base64-encode the serialized tensors. As a result, files serialized with newer versions of torch can't be opened by older versions. Set options(torch.serialization_version = 1) if you want your files to remain readable by older versions (see the example below). (#803)
Deprecated support for CUDA 10.2 on Windows. (#835)
linalg_matrix_rank and linalg_pinv gained atol and rtol arguments while deprecating tol and rcond (see the example below). (#835)
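
For example, to keep a file readable by older torch releases, set the option before saving (the tensor and file path below are only illustrative):

```r
library(torch)

# Opt into the legacy serialization format so that
# older versions of torch can still read the file.
options(torch.serialization_version = 1)

x <- torch_randn(10, 10)
torch_save(x, "x.pt")  # illustrative path
```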
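
A minimal sketch of the new tolerance arguments (the matrix and tolerance values are arbitrary):

```r
library(torch)

a <- torch_randn(5, 5)

# New-style calls: use atol/rtol instead of the deprecated tol/rcond.
linalg_matrix_rank(a, atol = 1e-5, rtol = 1e-5)
linalg_pinv(a, atol = 1e-5, rtol = 1e-5)
```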
New features
Improved auto-detection of CUDA version on Windows. (#798, @SvenVw)
Improved the performance of parallel dataloaders by using a socket connection to transfer data between workers and the main process. (#803)
keep_graph now defaults to the value of create_graph when calling $backward(). We also renamed it to retain_graph to match PyTorch (see the example below). (#811)
Optimizers created with optimizer now carry the class name in the generator and in instances. Optimizer generators now have the class torch_optimizer_generator. The class of torch optimizers has been renamed from torch_Optimizer to torch_optimizer. (#814)
New utility function nn_prune_head() to prune the top layer(s) of a network (see the example below). (#819, @cregouby)
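
A rough illustration of the renamed retain_graph argument (the toy tensor and loss are only for demonstration):

```r
library(torch)

x <- torch_randn(3, requires_grad = TRUE)
loss <- (x^2)$sum()

# retain_graph replaces the old keep_graph argument; by default it now
# follows the value of create_graph.
loss$backward(retain_graph = TRUE)
loss$backward()  # a second call works because the graph was retained
```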
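
A small sketch of nn_prune_head(), assuming its second argument is the number of top layers to drop (the model below is illustrative):

```r
library(torch)

# A small sequential model; the final linear layer acts as the "head".
model <- nn_sequential(
  nn_linear(10, 32),
  nn_relu(),
  nn_linear(32, 2)
)

# Prune the top layer(s); the second argument is assumed here to be the
# number of layers removed from the top of the network.
backbone <- nn_prune_head(model, 1)
```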