Obtain gradients between two variables

Dear PyTorch dev,

I am a professor at a US university working on data-driven scientific computing, currently using PyTorch. The ability to take gradients of one variable with respect to another enables some amazing new scientific computing algorithms. As you can see from this paper and this github repo github link (e.g., starting on line 121, “u = tf.gradients(psi, y)”), the ability to get gradients between two variables exists in TensorFlow and is becoming one of the major differentiators between platforms in scientific computing. The paper was published in 2019 and has gained 168 citations, very high in the realm of scientific computing. I see many people following suit and using this approach. We are using PyTorch now but might be forced to migrate to TF if such functionality is not available in PyTorch. Really hoping it could be released…

You can do this in PyTorch! Make sure requires_grad=True is set on x and y when you initially define them (you can pass requires_grad=True directly when creating the tensors), and then use torch.autograd.grad to compute a gradient of your output with respect to those variables.
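To illustrate, here is a minimal sketch of the tf.gradients(psi, y) pattern in PyTorch. The function psi below is a made-up stand-in (not the stream function from the paper); any differentiable expression of the inputs works the same way:

```python
import torch

# Inputs must be created with requires_grad=True so autograd
# tracks operations on them.
x = torch.tensor([1.0, 2.0], requires_grad=True)
y = torch.tensor([3.0, 4.0], requires_grad=True)

# Hypothetical field psi(x, y); stands in for whatever quantity
# you differentiate in your PDE solver.
psi = x * y**2

# d(psi)/dy, analogous to "u = tf.gradients(psi, y)" in TensorFlow.
# create_graph=True keeps the graph so higher-order derivatives
# (e.g., d2(psi)/dy2) can be taken from u afterwards.
(u,) = torch.autograd.grad(psi.sum(), y, create_graph=True)

# Analytically d/dy (x*y^2) = 2*x*y, i.e., [6., 16.] at these inputs.
print(u)
```

Because create_graph=True is passed, u itself is differentiable, so you can call torch.autograd.grad again on u to get second derivatives, which physics-informed approaches like the one in the paper typically need.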

Huh!! I didn’t know that! Gonna look into this now. Thanks!