Computing nn.Module inverse or backward pass

I think of an nn.Module as a function f that transforms an input x into an output y, y = f(x). Assuming all of my layers are invertible, is there a direct way to compute the backward pass for x = f^-1(y)?

Hi Zeeshan!

If your layers are indeed invertible and I interpret “compute the backward pass”
to mean compute the Jacobian of f^-1, then yes, this Jacobian is the matrix
inverse of the Jacobian of f: J[f^-1](y) = (J[f](x))^-1, where y = f(x).

This statement, together with important relevant context, can be found in
Wikipedia’s inverse-function-theorem entry.
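
For example, here is a minimal sketch that checks this relationship in one
dimension with autograd; the function f(x) = x**3 + x is just an illustrative
invertible example:

```python
import torch

# illustrative, strictly increasing (hence invertible) scalar function
def f(x):
    return x**3 + x

x = torch.tensor(2.0, requires_grad=True)
y = f(x)

# df/dx at x, computed by autograd
(df_dx,) = torch.autograd.grad(y, x)

# by the inverse function theorem, d(f^-1)/dy at y = f(x) is 1 / df_dx,
# obtained without ever constructing f^-1 explicitly
print(1.0 / df_dx)   # 1 / f'(2) = 1 / 13, approximately 0.0769
```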

If instead you are asking how to compute f^-1 itself, that, in general, can be
difficult. You could either compute the inverse of each of your layers separately
and then chain those inverses together in reverse order, or you could compute the
inverse of f, x = f^-1(y), by numerically solving f(x) = y for x, as sketched below.
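
As an illustration of the second approach, here is a sketch that recovers
x = f^-1(y) by minimizing the residual (f(x) - y)^2 with a gradient-based
optimizer; the function, initial guess, learning rate, and iteration count are
all illustrative assumptions:

```python
import torch

# illustrative invertible function
def f(x):
    return x**3 + x

y_target = torch.tensor(10.0)           # we want x with f(x) = 10.0
x = torch.zeros(1, requires_grad=True)  # illustrative initial guess

opt = torch.optim.Adam([x], lr=0.1)
for _ in range(1000):
    opt.zero_grad()
    loss = (f(x) - y_target).pow(2).sum()
    loss.backward()
    opt.step()

print(x.item())      # approximately 2.0, since f(2) = 10
print(f(x).item())   # approximately 10.0
```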

Best.

K. Frank

Thank you for the reply. I was asking about the backward pass of f^-1, but thanks for pointing out the inverse function theorem. Is there a way to use PyTorch’s autodiff for such a computation?

Hi Zeeshan!

You may use torch.autograd.functional.jacobian() or torch.func.jacrev() to compute
the Jacobian of your Module and torch.linalg.inv() to compute its matrix inverse.
Note that the Jacobian is returned with the output and input shapes combined (or as
a tuple of tensors if your Module takes multiple inputs), so you will have to
reshape it into a two-dimensional matrix before passing it to inv().
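
Here is a minimal sketch along these lines; the small Sequential network is an
illustrative stand-in for an invertible Module:

```python
import torch

torch.manual_seed(0)

# illustrative Module: square Linear layers (generically invertible)
# composed with Tanh (strictly monotonic, hence invertible)
net = torch.nn.Sequential(
    torch.nn.Linear(3, 3),
    torch.nn.Tanh(),
    torch.nn.Linear(3, 3),
)

x = torch.randn(3)

# for a single tensor input, jacobian() returns a tensor of shape
# output_shape + input_shape; flatten it into a 2-d matrix
J = torch.autograd.functional.jacobian(net, x)
J = J.reshape(-1, x.numel())

# Jacobian of f^-1 at y = f(x), via the inverse function theorem
J_inv = torch.linalg.inv(J)

print(J @ J_inv)   # approximately the 3 x 3 identity matrix
```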

Bear in mind that the mapping from input to output for a typical neural network
will not be invertible (for example, its Jacobian is not even square when the
input and output dimensions differ). Unless your neural network / Module is in
fact invertible, you won’t be able to invert the Jacobian.

Best.

K. Frank