chainer.grad(): how to implement this function in PyTorch

I am migrating code from the Chainer framework to PyTorch and came across the code below:

chainer.grad([loss_func(F.clipped_relu(X2, z=1.0), Yp)], [X2], set_grad=True, retain_grad=True)
I tried looking through the PyTorch documentation but was not able to find an equivalent function for the above code.

https://docs.chainer.org/en/stable/reference/generated/chainer.grad.html
The above link explains what this function does in the Chainer framework. I would appreciate it if someone could guide me on how to migrate the above Chainer code to PyTorch.

I think torch.autograd.grad would be the corresponding function.
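
For reference, here is a rough sketch of what the mapping could look like. The names loss_func, X2, and Yp are hypothetical stand-ins for the objects in your snippet, and Chainer's F.clipped_relu(x, z=1.0) is approximated with a clamp:

```python
import torch

# Hypothetical stand-ins for the objects in the original Chainer snippet
X2 = torch.randn(4, 10, requires_grad=True)
Yp = torch.randn(4, 10)
loss_func = torch.nn.MSELoss()

# Chainer's F.clipped_relu(x, z=1.0) clips activations to [0, 1];
# clamping the input to that range gives the same result.
loss = loss_func(torch.clamp(X2, min=0.0, max=1.0), Yp)

# Rough equivalent of chainer.grad([loss], [X2]); returns a tuple of gradients.
# Note: unlike set_grad=True in Chainer, this does NOT populate X2.grad.
grads = torch.autograd.grad(loss, [X2])
print(grads[0].shape)  # torch.Size([4, 10])
```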


Hello, thank you for this answer.

I am able to calculate the gradient using the torch.autograd.grad() function, but this does not populate the .grad field.

Could you tell me how I can populate this field using the gradient calculated from torch.autograd.grad()?

Any help will be highly appreciated

If you want to populate the .grad attribute of specific parameters, you could use the inputs argument in the backward call, as e.g. given here.
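
For example, a minimal sketch reusing the hypothetical names from the snippet above:

```python
# Recompute the loss and call backward with inputs=[X2]; this populates
# only X2.grad, which mirrors chainer.grad(..., set_grad=True).
loss = loss_func(torch.clamp(X2, min=0.0, max=1.0), Yp)
loss.backward(inputs=[X2])
print(X2.grad.shape)  # torch.Size([4, 10])
```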


Thank you for this. So this will update the .grad field of those parameters using the gradient that was calculated with the autograd.grad() function?

Thank you once again

Yes, that should be the case.
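
As a quick sanity check with the same hypothetical setup, both approaches should return matching gradients:

```python
# Gradient via torch.autograd.grad (does not touch .grad)
g_manual, = torch.autograd.grad(
    loss_func(torch.clamp(X2, min=0.0, max=1.0), Yp), [X2]
)

# Gradient via backward(inputs=...) (writes into X2.grad)
X2.grad = None
loss_func(torch.clamp(X2, min=0.0, max=1.0), Yp).backward(inputs=[X2])

print(torch.allclose(g_manual, X2.grad))  # True
```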
