Where is (inverse) backward implemented now?

I’m trying to get a handle on how the PyTorch code base is organized.

I can see where matrix inverse and the backward operation for matrix inverse were implemented in an earlier version:

However, in the new 1.0 version, I can find the implementation of inverse in BatchLinearAlgebra.cpp (https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/BatchLinearAlgebra.cpp), but I can’t figure out where the backward for matrix inverse is implemented.

Can someone give me a hint as to where this and other backward implementations are located? The organization of the C++ part of the code base is a bit obscure.

The backward of inverse is not implemented explicitly, because all the functions it calls have backward defined for them.

So effectively, it’s similar to how lambda x: torch.add(x, 1).mul(2) does not need a backward implemented for it, because .add and .mul have backward defined on them.
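For example, here is a minimal sketch of that composability using the same composite function: autograd chains the backward of each op, so the lambda itself never needs one.

```python
import torch

# Composite of ops that each define their own backward.
f = lambda x: torch.add(x, 1).mul(2)

x = torch.randn(3, requires_grad=True)
y = f(x).sum()
y.backward()

# The gradient is composed automatically: d/dx of 2*(x + 1) is 2 everywhere.
print(x.grad)  # tensor([2., 2., 2.])
```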

It looks like inverse just calls LAPACK routines. I don’t think it’s calling other PyTorch functions to implement matrix inverse.

I’m looking at apply_inverse in BatchLinearAlgebra.cpp:185.

I took you down the wrong rabbit hole, and it’s ENTIRELY my fault, sorry about that.

We do have functions in C++-land whose derivatives are entirely defined by tracing, but this is not such a function.

inverse's derivative is specified in derivatives.yaml here: https://github.com/pytorch/pytorch/blob/ef487d4f1d7b167dc28976b90e3a728006f37a55/tools/autograd/derivatives.yaml#L396-L397
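For reference, the formula that entry encodes is the standard identity: if Y = A⁻¹, then dY = -A⁻¹ dA A⁻¹, so the backward pass for an incoming gradient G is -Yᵀ G Yᵀ. Here is a hedged sketch checking that against autograd; the helper name inverse_backward is mine, not something from the code base.

```python
import torch

def inverse_backward(grad, output):
    # Hypothetical helper mirroring the analytic formula:
    # if Y = A^{-1}, then grad_A = -Y^T @ grad @ Y^T.
    return -output.t() @ grad @ output.t()

A = torch.randn(4, 4, dtype=torch.float64, requires_grad=True)
Y = torch.inverse(A)

G = torch.randn_like(Y)   # an arbitrary upstream gradient
Y.backward(G)             # autograd's (generated) backward

manual = inverse_backward(G, torch.inverse(A.detach()))
print(torch.allclose(A.grad, manual))  # True
```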

Thanks! Great. I see. So a bunch of code is being generated from YAML files.
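A quick way to convince yourself the generated backward is correct, without reading the generated C++, is torch.autograd.gradcheck; a minimal sketch, assuming double-precision input:

```python
import torch
from torch.autograd import gradcheck

# gradcheck compares the analytic backward (generated from derivatives.yaml)
# against numerical finite differences.
A = torch.randn(3, 3, dtype=torch.float64, requires_grad=True)
print(gradcheck(torch.inverse, (A,)))  # True if the backward matches
```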