Have autograd.grad in forward for torch.ONNX

Hi,

I have a model in PyTorch whose forward pass computes its output using autograd.grad(). I want to export this model to ONNX using torch.onnx.export, but ONNX has problems handling autograd.grad() inside the forward.

Bug I found:
If I run the model on the GPU, I can export the model by setting the forward like this:

def forward(self, x, y):
    # ... some calculation resulting in z ...
    z = autograd.grad(z, y, retain_graph=False, create_graph=False,
                      grad_outputs=torch.ones_like(y))[0]
    return 0 * (x + y) + z
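To make the setup concrete, here is a minimal, self-contained sketch of a model in this shape. The computation `z = y ** 2` and the class name `GradModel` are made up for illustration; only the autograd.grad call and the `0 * (x + y) + z` trick mirror the code above.

```python
import torch
from torch import autograd, nn

class GradModel(nn.Module):
    """Hypothetical minimal model: z = y**2, so the gradient w.r.t. y is 2*y."""

    def forward(self, x, y):
        # Make sure the input participates in the autograd graph.
        y = y.requires_grad_(True)
        z = y ** 2  # stand-in for "some calculation resulting in z"
        # Differentiate z w.r.t. the input y inside the forward pass.
        z = autograd.grad(z, y, retain_graph=False, create_graph=False,
                          grad_outputs=torch.ones_like(y))[0]
        # 0 * (x + y) keeps x referenced in the traced graph.
        return 0 * (x + y) + z
```

In eager mode this returns `2 * y` as expected; the export problems described below only show up once torch.onnx.export tries to trace the autograd.grad call.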

If I just return z, I get this error:

onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:pT

pT is the name of x. That is fine, as adding 0 * (x + y) to the output removes this error and the model exports to ONNX.

The issue that I have:
So I found a workaround for exporting the model to ONNX with autograd.grad, as seen above. But my issue now is precision. When exporting to ONNX (with the model on the GPU), the precision is VERY poor (sometimes 1000% off)… If I export the exact same code/model but initialize on the CPU instead, I get this error:

RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient
Tensor:
1.8547
1.2683
[ torch.FloatTensor{2,1} ]

1.8547 and 1.2683 are just dummy inputs.
This error comes from autograd.grad: if I remove it from the forward, the model exports fine.

My question/hope:
How should you properly export a model to ONNX that calculates the gradient with respect to its input? Or, if that isn't recommended at the moment, could support for it be added to torch.onnx? :crossed_fingers:
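For reference, the workaround I can fall back on when z has a closed form is to differentiate by hand and write the derivative directly into the forward, so the exported graph contains only ordinary ops and no autograd.grad call. This is just a sketch under that assumption (the `2 * y` derivative matches a hypothetical `z = y ** 2`, not my real model):

```python
import torch
from torch import nn

class ExplicitGradModel(nn.Module):
    """Hypothetical workaround: the gradient is written out analytically,
    so tracing for ONNX never has to go through autograd.grad."""

    def forward(self, x, y):
        dz_dy = 2 * y  # hand-derived d(y**2)/dy, replacing autograd.grad
        return 0 * (x + y) + dz_dy
```

A model like this should export with a plain `torch.onnx.export(model, (x, y), "model.onnx")` call, but of course it only works when the derivative can be written down explicitly, which is exactly what autograd.grad was meant to avoid.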

Thank you for your time, and I hope everything makes sense!