Is it possible to calculate the second-order derivative of a custom CUDA function?
When I try to compute it with the double-backward technique, I get the following error.
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
I guess the reason is that the computation graph is not created during the backward pass of the custom CUDA function…
How do you define your custom CUDA function?
If you use autograd.Function in Python to create new autograd Functions, you need to make sure that your backward can itself be auto-diffed. If it cannot (e.g. it launches a CUDA kernel directly rather than using differentiable ops), then your backward should call the apply of a second Function that implements your backward, and whose own backward implements the double-backward.
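A minimal sketch of this pattern, using a plain `x * x` op as a stand-in for a custom CUDA kernel (the `Square`/`SquareBackward` names are illustrative, not from any library): the first Function's backward delegates to a second Function via `apply`, so that when `create_graph=True` the backward step is itself recorded in the graph and a second `grad` call succeeds.

```python
import torch
from torch.autograd import Function

class Square(Function):
    """Stand-in for a custom op whose forward/backward would call CUDA kernels."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors
        # Delegate to a second Function instead of computing the gradient
        # with non-differentiable code; this makes the backward pass itself
        # differentiable, enabling double-backward.
        return SquareBackward.apply(x, grad_out)

class SquareBackward(Function):
    """Computes d(x^2)/dx * grad_out; its backward is the double-backward."""

    @staticmethod
    def forward(ctx, x, grad_out):
        ctx.save_for_backward(x, grad_out)
        return 2 * x * grad_out

    @staticmethod
    def backward(ctx, grad_grad):
        x, grad_out = ctx.saved_tensors
        # Gradients of (2 * x * grad_out) w.r.t. its two inputs:
        #   w.r.t. x:        2 * grad_out
        #   w.r.t. grad_out: 2 * x
        return 2 * grad_out * grad_grad, 2 * x * grad_grad

x = torch.tensor(3.0, requires_grad=True)
y = Square.apply(x)
g, = torch.autograd.grad(y, x, create_graph=True)  # first derivative: 2x = 6
h, = torch.autograd.grad(g, x)                     # second derivative: 2
```

In a real custom CUDA op, `SquareBackward.forward` would launch your backward kernel and `SquareBackward.backward` would launch (or compose) the kernels for the double-backward; the structure stays the same.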