Hello, it seems like ONNX Runtime for GPU requires NVIDIA's -devel packages instead of the -runtime ones. Can anyone confirm whether that is true? For example, using pytorch/pytorch:1.7.1-cuda11.0-cudnn8-runtime from Docker Hub as the base image to serve models with onnxruntime fails on import, apparently because the *-runtime package does not actually contain the libcudnn binary.
The *-devel packages are quite a bit larger, since they also contain nvcc, headers, etc., and I'd rather avoid using them if possible. Can anyone help me understand what exactly the issue is here? Does ONNX Runtime require nvcc even when doing inference only? Thank you.
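One way to see which CUDA libraries a given image actually ships is to probe the dynamic linker from inside the container. A minimal sketch (the `has_lib` helper and the specific library names are my own illustration for CUDA 11 / cuDNN 8, not part of onnxruntime):

```python
import ctypes

def has_lib(name: str) -> bool:
    """Return True if the dynamic linker can resolve the given shared library."""
    try:
        ctypes.CDLL(name)
        return True
    except OSError:
        return False

# Library names are illustrative; inside a *-runtime image that does not
# ship cuDNN, the libcudnn probe would report "missing".
for lib in ("libcudnn.so.8", "libcublas.so.11"):
    print(lib, "found" if has_lib(lib) else "missing")
```

Running this inside the base image before installing onnxruntime makes it easy to tell a missing shared library apart from an onnxruntime packaging problem.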
-
Yes, you are right. A recent change (#7110) requires the -devel packages; before that, I don't think it did. See: https://github.com/microsoft/onnxruntime/blob/master/dockerfiles/Dockerfile.cuda. Otherwise, ONNX Runtime works well with the runtime-only NVIDIA Docker image.
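For reference, a devel-based image along those lines might look like the sketch below (the base tag and install steps are assumptions for illustration, not copied from the linked Dockerfile.cuda):

```dockerfile
# Assumed base tag; the official Dockerfile.cuda may pin a different one.
FROM nvidia/cuda:11.0.3-cudnn8-devel-ubuntu18.04

RUN apt-get update \
    && apt-get install -y --no-install-recommends python3-pip \
    && rm -rf /var/lib/apt/lists/*

# The -devel image ships the full CUDA toolkit (nvcc, CUPTI, headers),
# which is what the onnxruntime-gpu import needs after the change above.
RUN pip3 install onnxruntime-gpu
```

The trade-off discussed in this thread is image size: the -devel base is several gigabytes larger than -runtime because of the toolkit.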
-
@snnn, #7110 did not introduce that requirement; CUPTI ships with the CUDA toolkit by default.