I work a lot with PyTorch and find it great and useful.
I wanted to know if there is any way to tell from the documentation whether a specific function supports backward. For most functions it is clear by definition, but for a few it's not so clear and sometimes depends on the implementation.
I would assume that the majority of mathematically differentiable functions are also differentiable in PyTorch, but to make sure that's the case, you could check the
.grad_fn attribute of the output using inputs that require gradients, e.g. as seen here:
import torch

x = torch.randn(10, 10, requires_grad=True)
out = torch.sum(x, dim=1)
print(out.grad_fn)
> <SumBackward1 object at 0x7f79da0b9520> # differentiable
out = torch.argmax(x, dim=1)
print(out.grad_fn)
> None # not differentiable
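The same check can be wrapped in a small reusable helper. This is a sketch, and `is_differentiable` is a hypothetical name of my own, not a PyTorch API:

```python
import torch

def is_differentiable(fn, *args):
    # Hypothetical helper: call fn on inputs that require gradients and
    # check whether autograd attached a grad_fn node to the output.
    out = fn(*args)
    return out.grad_fn is not None

x = torch.randn(10, 10, requires_grad=True)
print(is_differentiable(torch.sum, x))     # True: sum has a backward
print(is_differentiable(torch.argmax, x))  # False: argmax returns indices
```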
Thanks for your reply.
This is exactly what I am doing.
I think it would be worth adding this to the documentation of each function (or just noting when a function is not differentiable).
What is the best way to suggest this idea? Should I open an issue on GitHub?
Sure, you can create a feature request on GitHub. Would you be interested in working on it as well?
I don’t have any experience contributing to such a big library, but I would be happy to help.
Note that there are several subtleties here.
At the first level we have the following:
- There are directly differentiable functions (those listed in tools/autograd/derivatives.yaml); these are the easy ones. For those, a backward exists (somewhere).
- Then there are functions that reduce to directly differentiable functions (e.g. einsum). This means that they call these directly differentiable functions, and autograd will do its autograd thing to handle the differentiation.
- Some of these compositions will go through unexposed/internal functions (e.g. convolutions, ctc loss etc.).
- The non-differentiable functions are the remainder…
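To illustrate the second bullet, here is a small sketch using einsum: it has no hand-written derivative entry, but its output still carries a grad_fn because it is composed of primitive ops that do (the exact grad_fn node name depends on the PyTorch version, so I don't print a specific one here):

```python
import torch

# einsum reduces to directly differentiable primitives (permutes, matmuls, ...),
# so autograd differentiates through the composition automatically.
a = torch.randn(3, 4, requires_grad=True)
b = torch.randn(4, 5, requires_grad=True)
out = torch.einsum('ij,jk->ik', a, b)  # matrix multiply spelled as einsum
print(out.grad_fn is not None)  # True: the output is differentiable
out.sum().backward()
print(a.grad.shape)             # gradient w.r.t. a has a's shape
```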
But then there are second derivatives, where the same scheme applies, and there is the question of “differentiable w.r.t. which arguments?”.
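Both points can be seen in a short sketch: create_graph=True makes the first gradient itself differentiable (second derivatives), and an op like embedding is differentiable w.r.t. its weight argument but not its integer indices:

```python
import torch

# Second derivative of y = x**3 via double backward: dy/dx = 3x**2, d2y/dx2 = 6x.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3
g, = torch.autograd.grad(y, x, create_graph=True)  # g = 3 * 2**2 = 12
g2, = torch.autograd.grad(g, x)                    # g2 = 6 * 2 = 12
print(g.item(), g2.item())  # 12.0 12.0

# "Differentiable w.r.t. which arguments?": embedding has a gradient
# w.r.t. the weight matrix, but the integer index input cannot get one.
weight = torch.randn(5, 3, requires_grad=True)
idx = torch.tensor([0, 2])  # integer tensor: no gradient possible
out = torch.nn.functional.embedding(idx, weight)
out.sum().backward()
print(weight.grad is not None)  # True
```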