Gradient computation for physics equations

Hello.

I’m currently working on material behavior using Saint-Venant-Kirchhoff (really well explained here). My network outputs the displacements.

Here is my problem: how can I get the gradient of the displacement (needed to compute the matrix F in the equations)?
I tried several things such as:

Here, ones is defined by:
ones = torch.ones_like(displacement).to(displacement.device)
displacement.backward(ones)

torch.autograd.grad(displacement, ones)  # Same as above

displacement2 = 1.0 * displacement.clone()  # desperately trying to make it a leaf
displacement2.backward(ones)

and many other failed attempts.
It always raises the same problem: the tensor displacement is not a leaf, hence backward returns None.

My questions here are:
Is there a clean way to compute the gradient?
How can I force my tensor to be a leaf?

I’m sure it’s only a single line of code or just a function call, but I didn’t find it in the documentation.

Thank you in advance for your answers.
Alban.

I’m no expert, but a leaf node is a Tensor that you created explicitly, not as the result of some operation, e.g. torch.tensor([1., 2.]).

I don’t know what your variable displacement is, or what type of function it is used in, but is it created like torch.tensor(some_value, requires_grad=True)?
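For example, a small illustrative snippet (not your actual code) showing the difference:

import torch

a = torch.tensor([1., 2.], requires_grad=True)  # created directly -> a leaf
b = a * 2                                       # result of an operation -> not a leaf
print(a.is_leaf, b.is_leaf)  # True False

b.sum().backward()
print(a.grad)  # tensor([2., 2.]); gradients are accumulated on the leaf
print(b.grad)  # None; .grad is not populated for non-leaf tensors by default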

Hi,

So you have x = \phi(X) and what you want is dx/dX (the derivative of the \phi function).
So using PyTorch terminology, your output is x and the input with respect to which you want the gradient is X.

So the leaf that will get the gradient is X: X = torch.tensor(current_pos, requires_grad=True) where current_pos is a python number, or a numpy array, or anything else.
Then you want to get the output using only PyTorch’s ops (otherwise autograd won’t work): x = phi_func(X).
If you did this right and all ops in your function phi are differentiable, then you should get an output x that requires gradients.
Now you can call backward to get the gradient: x.backward() or autograd.grad(x, X).

Note that autograd is doing automatic differentiation and so is computing a vector Jacobian product. So if your output is not scalar, you will need to provide this vector that will be multiplied with the Jacobian.
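For example, a minimal sketch (phi_func here is just a placeholder for your own network/function, and X is a made-up input):

import torch
from torch import autograd

X = torch.tensor([0.1, 0.2, 0.3], requires_grad=True)  # placeholder input

def phi_func(X):
    # stand-in for your deformation map, built only from PyTorch ops
    return X ** 2 + X

x = phi_func(X)          # non-scalar output
v = torch.ones_like(x)   # the vector in the vector-Jacobian product
grad_X, = autograd.grad(x, X, grad_outputs=v)
print(grad_X)            # same shape as X; here equal to 2*X + 1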

Thanks for your replies.

OK, so I have something that looks like your init.

x = torch.FloatTensor(self.inputs).to(device)
x.requires_grad = True

This goes straight into the network, which tries to produce the displacement d = x - X.
So far only pytorch’s ops have been used.
Since we get F = dx/dX = grad(d) + I, I naively thought I could use d.backward(ones).

The problem might come from the init, since this is the only part that might not match perfectly what you just described.

Note that your call to autograd.grad() in the first post is not correct. You should give the input before the grad_output.

You can do:

X = torch.tensor(self.inputs, dtype=torch.float, device=device, requires_grad=True)

x = phi(X)

d = x - X

ones = torch.ones_like(d)
F, = autograd.grad(d, X, ones)

Is that what your code looks like?
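For reference, here is a minimal end-to-end sketch along those lines (phi below is just a stand-in network and the values in inputs are placeholders; the grad call computes the vector-Jacobian product with ones, as discussed above, not the full Jacobian):

import torch
from torch import nn, autograd

device = torch.device("cpu")

# stand-in for the displacement network; any module built from PyTorch ops works
phi = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 3))

inputs = [[0.0, 0.0, 0.0], [0.1, 0.2, 0.3]]  # placeholder rest positions
X = torch.tensor(inputs, dtype=torch.float, device=device, requires_grad=True)

x = phi(X)   # deformed positions
d = x - X    # displacements

ones = torch.ones_like(d)
grad_d, = autograd.grad(d, X, ones)  # vector-Jacobian product, same shape as X
print(grad_d.shape)  # torch.Size([2, 3])

If you need the full per-point Jacobian of d with respect to X (to assemble F = grad(d) + I), you would call autograd.grad once per output component with a one-hot grad_outputs vector, or, in recent PyTorch versions, use torch.autograd.functional.jacobian.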