R-operator in pytorch?

Is it possible to obtain something like the R-op in PyTorch (Jacobian-times-vector product)?

See e.g. the Theano documentation: http://deeplearning.net/software/theano/tutorial/gradients.html#r-operator


Hi,

The backward engine in PyTorch only allows you to perform the L-op (vector-Jacobian product) using backward-mode AD.
The R-op is tricky for neural nets when using backward-mode AD, because you don't want to materialize the full Jacobian for memory reasons. You would need forward-mode AD to implement it efficiently, and that is not implemented.
I am not sure how hard it would be to build a forward-mode AD engine for PyTorch, but it is definitely non-trivial.
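For reference, the L-op is exactly what a backward pass already computes: calling `torch.autograd.grad` with an explicit `grad_outputs` vector `v` gives you `v^T J`. A minimal sketch (`tanh` is just an arbitrary example function here):

```python
import torch

x = torch.randn(3, requires_grad=True)
y = torch.tanh(x)   # any differentiable f(x)
v = torch.randn(3)  # vector on the output side, same shape as y

# One backward pass with grad_outputs=v computes the L-op v^T J.
(vJ,) = torch.autograd.grad(y, x, grad_outputs=v)
print(vJ)
```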

Hi:

Is there any example of an L-op implementation in PyTorch?

It seems there is hope for the R-op in PyTorch, because
we can implement the R-op by applying the L-op twice, as described here.

In fact, @PiotrSokol was kind enough to share his implementation; a sketch of the same idea is below.
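A minimal sketch of the double-backward trick (the helper name `jvp` and the example function are illustrative, not necessarily identical to that implementation): the first L-op computes `g(u) = J^T u` for a dummy output-side vector `u` with the graph retained, and a second L-op differentiates `<g(u), v>` with respect to `u`, which yields `J v`.

```python
import torch

def jvp(f, x, v):
    """R-op: compute J(f)(x) @ v using two L-ops (double backward)."""
    x = x.detach().requires_grad_(True)
    y = f(x)
    # Dummy vector on the output side; its value never matters.
    u = torch.zeros_like(y, requires_grad=True)
    # First L-op: g = J^T u, with create_graph=True so we can
    # differentiate through it a second time.
    (g,) = torch.autograd.grad(y, x, grad_outputs=u, create_graph=True)
    # Second L-op: d<g, v>/du = J v.
    (Jv,) = torch.autograd.grad(g, u, grad_outputs=v)
    return Jv

# Quick check: for f = tanh, J is diagonal with entries 1 - tanh(x)^2.
x = torch.randn(3)
v = torch.randn(3)
print(jvp(torch.tanh, x, v))
print((1 - torch.tanh(x) ** 2) * v)  # should match
```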

Best regards

Thomas

Awesome. It works like a charm 🙂
Thanks a lot @tom @PiotrSokol