Replies: 1 comment 1 reply
-
Yeah, technically that should be no problem. Unfortunately I don't have an SLI configuration to test with, and I've only been using Python and torch/CUDA for a few weeks, so my knowledge isn't deep enough yet to make such a change blindly without testing a bunch of things to see how it works. But theoretically you should just be able to run the for loop in parallel, since there's no write overlap between the work done.
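A minimal sketch of what that might look like, assuming the per-segment work really has no write overlap. The `process_segment` function is hypothetical and stands in for the body of the original for loop; the code round-robins segments across whatever GPUs `torch.cuda.device_count()` reports, falling back to CPU so it still runs on a machine without CUDA:

```python
import threading
import torch

def process_segment(segment: torch.Tensor, device: torch.device) -> torch.Tensor:
    # Hypothetical per-segment work, standing in for the loop body.
    return (segment.to(device) * 2).cpu()

def run_parallel(segments):
    # One host thread per segment; safe only because segments don't
    # overlap, so no two threads write to the same data.
    n_gpus = torch.cuda.device_count()
    devices = [
        torch.device(f"cuda:{i % n_gpus}") if n_gpus else torch.device("cpu")
        for i in range(len(segments))
    ]
    results = [None] * len(segments)

    def worker(i: int) -> None:
        results[i] = process_segment(segments[i], devices[i])

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(len(segments))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

segments = [torch.ones(4) * i for i in range(3)]
out = run_parallel(segments)
```

This is only a sketch; in practice you'd also want to pin each thread to its device and synchronize before gathering results, and the GIL means the host-side threads only help when most of the time is spent inside CUDA kernels.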
-
If I understand the changes you've made by splitting up the work: how hard would it be to send segments to different GPUs at the same time, rather than serially feeding a single GPU? I'm thinking of those of us with used K80s or SLI-like configurations.