Using Torch.jl without a GPU? #20
For example, this is what I currently get when I try to run `import Torch`:
[ Info: Precompiling Torch [6a2ea274-3061-11ea-0d63-ff850051a295]
ERROR: LoadError: LoadError: could not load library "libdoeye_caml"
dlopen(libdoeye_caml.dylib, 1): image not found
Stacktrace:
[1] macro expansion at /Users/dilum/.julia/packages/Torch/Q8Y45/src/error.jl:12 [inlined]
[2] at_grad_set_enabled(::Int64) at /Users/dilum/.julia/packages/Torch/Q8Y45/src/wrap/libdoeye_caml_generated.jl:70
[3] top-level scope at /Users/dilum/.julia/packages/Torch/Q8Y45/src/tensor.jl:6
[4] include(::Function, ::Module, ::String) at ./Base.jl:380
[5] include at ./Base.jl:368 [inlined]
[6] include(::String) at /Users/dilum/.julia/packages/Torch/Q8Y45/src/Torch.jl:1
[7] top-level scope at /Users/dilum/.julia/packages/Torch/Q8Y45/src/Torch.jl:25
[8] include(::Function, ::Module, ::String) at ./Base.jl:380
[9] include(::Module, ::String) at ./Base.jl:368
[10] top-level scope at none:2
[11] eval at ./boot.jl:331 [inlined]
[12] eval(::Expr) at ./client.jl:467
[13] top-level scope at ./none:3
in expression starting at /Users/dilum/.julia/packages/Torch/Q8Y45/src/tensor.jl:6
in expression starting at /Users/dilum/.julia/packages/Torch/Q8Y45/src/Torch.jl:25
ERROR: Failed to precompile Torch [6a2ea274-3061-11ea-0d63-ff850051a295] to /Users/dilum/.julia/compiled/v1.6/Torch/2cR1S_xGAhl.ji.
Stacktrace:
[1] error(::String) at ./error.jl:33
[2] compilecache(::Base.PkgId, ::String) at ./loading.jl:1290
[3] _require(::Base.PkgId) at ./loading.jl:1030
[4] require(::Base.PkgId) at ./loading.jl:928
[5] require(::Module, ::Symbol) at ./loading.jl:923
Yes, this is due to using the GPU binaries from torch and having the wrapper rely on CUDA being available. We just need to build a CPU-only set of binaries.
@DhairyaLGandhi Is the following error also linked to the absence of a GPU? Running `julia> using Torch` gives: `/home/ssing/.julia/artifacts/d6ce2ca09ab00964151aaeae71179deb8f9800d1/lib/libtorch.so: undefined symbol: _ZN3c1016C10FlagsRegistryB5cxx11Ev`
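A diagnostic aside: the unresolved name in that message is a mangled C++ symbol, and demangling it shows it belongs to libtorch's c10 flags registry built against the cxx11 ABI, which often points to an ABI or build mismatch between the wrapper and libtorch rather than a missing GPU as such. A quick way to inspect this (assuming binutils' `c++filt` and `nm` are available; the artifact path is the one from the error and will differ per machine):

```shell
# Demangle the symbol from the error message (c++filt ships with binutils)
c++filt _ZN3c1016C10FlagsRegistryB5cxx11Ev
# typically prints: c10::C10FlagsRegistry[abi:cxx11]()

# List the undefined symbols libtorch.so expects other libraries to provide;
# the artifact path below is taken from the error and will differ on your machine
LIBTORCH=~/.julia/artifacts/d6ce2ca09ab00964151aaeae71179deb8f9800d1/lib/libtorch.so
if [ -f "$LIBTORCH" ]; then
  nm -D "$LIBTORCH" | grep ' U ' | head
fi
```

If the demangled symbol appears with `[abi:cxx11]` but the library that should define it was built with the pre-cxx11 ABI (or a different libtorch version), the loader reports exactly this kind of undefined-symbol failure.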
I mainly develop on my laptop, which doesn't have a CUDA GPU, so I can't import this package unless a CPU-only version is compiled.
I tried to compile a CPU-only version of PyTorch and Torch.jl using the wizard, following this build script: https://github.com/JuliaPackaging/Yggdrasil/blob/master/T/Torch/build_tarballs.jl I ran this line: but got this error
You might need to make sure the CUDA deps don't leak into the build process. The CMakeLists and the CUDA-related packages would need to be put behind a boolean GPU flag.
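As a sketch of what "behind a boolean GPU flag" could look like in the wrapper's CMakeLists (the `USE_CUDA` option name and the target name here are illustrative assumptions, not the actual contents of the repo's build files):

```cmake
# Hypothetical sketch: gate all CUDA dependencies behind one switch so that a
# CPU-only configure never even searches for the CUDA toolkit.
option(USE_CUDA "Build the wrapper with CUDA support" OFF)

if(USE_CUDA)
  find_package(CUDAToolkit REQUIRED)              # only looked up when enabled
  target_link_libraries(doeye_caml PRIVATE CUDA::cudart)
  target_compile_definitions(doeye_caml PRIVATE WITH_CUDA)
endif()
```

A CPU build would then configure with `-DUSE_CUDA=OFF`, and the Yggdrasil recipe could set the flag per platform instead of always pulling in the CUDA packages.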
I get that, but I can't tell which line pulls in the CUDA deps.
This will need the Torch_jll package to be split into a CPU-only Torch_jll and a Torch_CUDA_jll: JuliaPackaging/Yggdrasil#9785 (comment). Likewise for TorchCAPI_jll.
Would it be possible to add support for using Torch.jl on a machine without a GPU?