Hello @vidarsumo
Searching for `device =` in the code suggests that it is fairly thorough about sending all tensors to the GPU (`device = "cuda"`) everywhere, but we may have missed something.
Currently the CI/CD workflow of the package is not run on GPU.
But in the cloud there is always a chance that the GPU is not detected, depending on the virtualization stack. Can you confirm that:

- you get the expected results from torch::cuda_is_available() and torch::cuda_device_count()?
- you configured an explicit device = "cuda" in the tft config parameters?
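As a quick sanity check of both points above, something like the following R snippet (a sketch, assuming the torch R package is installed; function names are from the torch for R API) prints what torch sees and, if CUDA is detected, tries to actually place a tensor on the GPU:

```r
library(torch)

# 1. Is CUDA visible to torch at all?
cuda_ok <- cuda_is_available()
cat("CUDA available:", cuda_ok, "\n")
cat("CUDA devices:  ", cuda_device_count(), "\n")

# 2. If it is, confirm a tensor can really be allocated on the GPU.
if (cuda_ok) {
  x <- torch_randn(2, 2, device = "cuda")
  print(x$device)
}
```

If cuda_is_available() already returns FALSE here, the problem is in the CUDA/driver setup of the VM rather than in the TFT code.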
Hope it helps
Hello @vidarsumo
Sorry, my mistake: device = "cuda" is not exposed in the user API, and it is poorly used and configured in the code.
We definitely need to improve on that.
I tested TFT on an Azure DSVM, which has CUDA and cuDNN installed (https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/tools-included), but it did not use the GPU (a V100).
Do I have to do something so that TFT uses the GPU?