Does anyone know whether I could use f2py during training? The idea is to use the model's output as an input to a fortran subroutine that I can import into python via numpy's f2py. The question is whether the computation graph would continue once I do operations within the subroutine.

Hi Ivan!

I don’t know anything about f2py, but if I understand your proposal, no, the computation graph won’t continue through the call into numpy / f2py, which is to say, you won’t be able to backpropagate gradients through the numpy / f2py call.

To be able to use pytorch’s autograd for backpropagation, you have two choices:

You could translate the fortran code into pytorch tensor operations, after which you will get autograd and backpropagation “for free.”
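As a minimal sketch of this first option: suppose (purely as a made-up stand-in for your fortran routine) that the subroutine computes y = x² · exp(−x), summed over the input. Rewritten with pytorch tensor operations, autograd tracks it automatically:

```python
import torch

# Hypothetical stand-in for the fortran computation:
# y = sum(x**2 * exp(-x)), written as pytorch tensor operations.
def f(x):
    return (x**2 * torch.exp(-x)).sum()

x = torch.tensor([0.5, 1.0, 2.0], requires_grad=True)
y = f(x)
y.backward()   # autograd computes dy/dx "for free"
# x.grad now holds the analytic gradient (2*x - x**2) * exp(-x)
print(x.grad)
```

Any fortran routine you can express this way gets gradients with no extra work.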

Or you could write a custom autograd function. In this case, you would implement a `forward()` function that computes the desired function and could be implemented using numpy / f2py. But you also have to implement the companion `backward()` function that (roughly speaking) computes the gradient of your function.

To write `backward()`, you would typically differentiate your function analytically, and then implement the numerical evaluation of that analytic derivative. (You could also differentiate it numerically.) But it would be perfectly fine to implement the derivative evaluation that forms the guts of `backward()` in fortran via numpy / f2py (with a wrapper that converts its inputs and outputs from and to pytorch tensors).
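Here is a sketch of such a custom autograd function. The function y = x² · exp(−x) and the two numpy helpers are made-up stand-ins; in your case, `np_forward()` and `np_grad()` would instead call into your f2py-compiled fortran module:

```python
import numpy as np
import torch

def np_forward(x):
    # Stand-in for the f2py-wrapped fortran forward computation.
    return x**2 * np.exp(-x)

def np_grad(x):
    # Stand-in for the analytic derivative, also evaluable in fortran:
    # d/dx [x**2 * exp(-x)] = (2*x - x**2) * exp(-x)
    return (2.0 * x - x**2) * np.exp(-x)

class FortranFunc(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Wrapper: tensor -> numpy, call the external routine, numpy -> tensor.
        y = np_forward(x.detach().cpu().numpy())
        return torch.as_tensor(y, dtype=x.dtype, device=x.device)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        g = np_grad(x.detach().cpu().numpy())
        grad = torch.as_tensor(g, dtype=x.dtype, device=x.device)
        return grad_output * grad   # chain rule

x = torch.tensor([0.5, 1.0, 2.0], requires_grad=True)
y = FortranFunc.apply(x).sum()
y.backward()   # gradients flow through the numpy "fortran" call
print(x.grad)
```

You can check such an implementation against numerical differentiation with `torch.autograd.gradcheck()` (use double precision for that).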

Best.

K. Frank