Hello,
Thank you for sharing your work. I am interested in incorporating other kinds of losses into SDF training that involve the input derivatives, for example the regularization loss proposed in this paper. Is this possible to achieve with tcnn, or should I open a feature request?
Thanks in advance.
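For concreteness, a well-known loss of this kind is the Eikonal regularizer used in SDF training (shown here purely as an illustration of a term that depends on input derivatives; the paper referenced above may propose something different):

$$\mathcal{L}_{\text{reg}} = \mathbb{E}_x\left[\big(\lVert \nabla_x f_\theta(x) \rVert_2 - 1\big)^2\right]$$

where $f_\theta$ is the SDF network. Evaluating this term needs $\nabla_x f_\theta$, the derivative of the output w.r.t. the network inputs, and backpropagating it needs second derivatives, i.e. double-backward.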
-
Hi there, while tcnn doesn't directly support losses w.r.t. input derivatives, @ventusff recently added support for double-backward in the case of the hash encoding: NVlabs/tiny-cuda-nn#69. You could use this as a building block to extend support to tcnn's neural networks, or you could use it via the PyTorch bindings in combination with a PyTorch MLP. Cheers!
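For the PyTorch-bindings route, a minimal sketch could look like the following. It is untested; the encoding config, network sizes, and the Eikonal-style loss are illustrative assumptions, and it relies on the hash-encoding double-backward support from the PR above.

```python
import torch
import tinycudann as tcnn

# Hash encoding via tcnn's PyTorch bindings. The config values below are
# illustrative defaults, not taken from this discussion.
encoding = tcnn.Encoding(
    n_input_dims=3,
    encoding_config={
        "otype": "HashGrid",
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
        "per_level_scale": 2.0,
    },
)

# Plain PyTorch MLP instead of tcnn's fused networks, since per the reply
# above only the hash encoding currently has double-backward support.
mlp = torch.nn.Sequential(
    torch.nn.Linear(encoding.n_output_dims, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
).cuda()

x = torch.rand(4096, 3, device="cuda", requires_grad=True)  # HashGrid expects inputs in [0, 1]
sdf = mlp(encoding(x).float())  # tcnn outputs half precision by default; cast for the MLP

# Gradient of the SDF w.r.t. the *inputs*; create_graph=True keeps this
# gradient differentiable so the regularizer below can be backpropagated.
(grad_x,) = torch.autograd.grad(
    sdf, x, grad_outputs=torch.ones_like(sdf), create_graph=True
)

# Eikonal-style regularizer as an example of an input-derivative loss.
eikonal_loss = ((grad_x.norm(dim=-1) - 1.0) ** 2).mean()
eikonal_loss.backward()  # exercises the double-backward path in the encoding
```

The two key pieces are `create_graph=True` in the `torch.autograd.grad` call, which keeps the input gradient differentiable, and using a plain PyTorch MLP in place of a fused tcnn network so that the second backward pass stays within PyTorch's autograd.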