Hello, and good day. I have exported a TorchScript (JIT) file using this script: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/zipformer/export.py. This is for my ASR project, but I hit the following problem when starting Triton Server:
root@workstation-003:/workspace/Triton-ASR-Client/jit_config_files/model# tritonserver --model-repository=/workspace/Triton-ASR-Client/jit_config_files --http-port=8008 --grpc-port=8009 --metrics-port 8010
I0126 14:16:30.950672 2312 pinned_memory_manager.cc:277] "Pinned memory pool is created at '0x7f017e000000' with size 268435456"
I0126 14:16:30.951132 2312 cuda_memory_manager.cc:107] "CUDA memory pool is created on device 0 with size 67108864"
I0126 14:16:30.957236 2312 model_lifecycle.cc:472] "loading: model:1"
I0126 14:16:32.251723 2312 libtorch.cc:2547] "TRITONBACKEND_Initialize: pytorch"
I0126 14:16:32.251784 2312 libtorch.cc:2557] "Triton TRITONBACKEND API version: 1.19"
I0126 14:16:32.251805 2312 libtorch.cc:2563] "'pytorch' TRITONBACKEND API version: 1.19"
I0126 14:16:32.252781 2312 libtorch.cc:2596] "TRITONBACKEND_ModelInitialize: model (version 1)"
W0126 14:16:32.253609 2312 libtorch.cc:329] "skipping model configuration auto-complete for 'model': not supported for pytorch backend"
I0126 14:16:32.254088 2312 libtorch.cc:358] "Optimized execution is enabled for model instance 'model'"
I0126 14:16:32.254099 2312 libtorch.cc:377] "Cache Cleaning is disabled for model instance 'model'"
I0126 14:16:32.254111 2312 libtorch.cc:394] "Inference Mode is enabled for model instance 'model'"
I0126 14:16:32.254123 2312 libtorch.cc:413] "cuDNN is enabled for model instance 'model'"
I0126 14:16:32.254233 2312 libtorch.cc:2640] "TRITONBACKEND_ModelInstanceInitialize: model_0_0 (CPU device 0)"
I0126 14:16:34.870421 2312 libtorch.cc:2674] "TRITONBACKEND_ModelInstanceFinalize: delete instance state"
terminate called after throwing an instance of 'c10::Error'
what(): Method 'forward' is not defined.
Exception raised from get_method at /tmp/tritonbuild/pytorch/build/include/torch/torch/csrc/jit/api/object.h:111 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits, std::allocator >) + 0x98 (0x7f01bc3de4b8 in /opt/tritonserver/backends/pytorch/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) + 0xe0 (0x7f01bc38c842 in /opt/tritonserver/backends/pytorch/libc10.so)
frame #2: + 0x3853a (0x7f01bc53753a in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #3: + 0x22504 (0x7f01bc521504 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #4: + 0x28670 (0x7f01bc527670 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #5: + 0x288d2 (0x7f01bc5278d2 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #6: TRITONBACKEND_ModelInstanceInitialize + 0x491 (0x7f01bc527db1 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #7: + 0x1acc4f (0x7f01c5917c4f in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #8: + 0x1ade97 (0x7f01c5918e97 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #9: + 0x190265 (0x7f01c58fb265 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #10: + 0x1908b6 (0x7f01c58fb8b6 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #11: + 0x19d22d (0x7f01c590822d in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #12: + 0x99ee8 (0x7f01c4f6bee8 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #13: + 0x1869bb (0x7f01c58f19bb in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #14: + 0x197e4a (0x7f01c5902e4a in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #15: + 0x19c67c (0x7f01c590767c in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #16: + 0x29764d (0x7f01c5a0264d in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #17: + 0x29ac2c (0x7f01c5a05c2c in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #18: + 0x3f7272 (0x7f01c5b62272 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #19: + 0xdc253 (0x7f01c51d7253 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #20: + 0x94ac3 (0x7f01c4f66ac3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #21: clone + 0x44 (0x7f01c4ff7a04 in /usr/lib/x86_64-linux-gnu/libc.so.6)
Aborted (core dumped)
The exported model works fine in other environments, so how can I deploy it here? The error that stands out is: `Method 'forward' is not defined.` 🙏
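For anyone debugging the same thing: before involving Triton at all, you can check whether the exported TorchScript file actually defines a `forward` method by loading it directly. A minimal sketch, assuming the exported file is named `model.pt` (substitute whatever export.py produced); note that `_method_names()` is a private TorchScript accessor and may differ across torch versions:

```python
import torch

# Load the exported TorchScript file outside of Triton.
# "model.pt" is a placeholder for the file produced by export.py.
m = torch.jit.load("model.pt", map_location="cpu")

# List the methods that were actually compiled into the module.
# NOTE: _method_names() is a private accessor used here only for
# quick inspection; it may change between torch versions.
print(m._c._method_names())

# m.code pretty-prints forward's TorchScript; if forward was never
# compiled, this raises the same "Method 'forward' is not defined"
# error that Triton reports in the log above.
print(m.code)
```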
The `torch.jit.ignore` here seems like the issue to me. Triton's PyTorch backend needs a `forward` method on the scripted module to be able to execute inference.
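To illustrate the failure mode (a minimal sketch, not the actual icefall code; the class and method names below are made up): decorating `forward` with `@torch.jit.ignore` leaves it as a Python-only stub, so `torch.jit.script` produces an artifact with no compiled `forward` for Triton's libtorch backend to call. One workaround is a thin wrapper that exposes a real `forward` delegating to whichever compiled method the export does provide:

```python
import torch


class Toy(torch.nn.Module):
    # Hypothetical stand-in for the icefall model: forward is ignored,
    # so it is never compiled into the saved TorchScript artifact.
    @torch.jit.ignore
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.encode(x)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # This method IS compiled (it is called from compiled code
        # below) and survives torch.jit.save().
        return x * 2


class ForwardWrapper(torch.nn.Module):
    # Workaround: give Triton the plain `forward` it expects by
    # delegating to a method that was actually compiled.
    def __init__(self, model: torch.nn.Module):
        super().__init__()
        self.model = model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 'encode' is an assumption; call whichever method(s)
        # export.py really compiled for your model.
        return self.model.encode(x)


scripted = torch.jit.script(ForwardWrapper(Toy()))
torch.jit.save(scripted, "model_with_forward.pt")  # placeholder name
```

Alternatively, for transducer models like zipformer, a common pattern is to export the encoder, decoder, and joiner as separate entry points and serve each as its own Triton model, so no monolithic `forward` is needed.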