Sorry if this is obvious, but I find the description of
torch.autograd.backward(variables, grad_variables, retain_variables=False) quite confusing.
I’m working on a project where I have a vector of variables that I would like to differentiate to find the Jacobian. When it comes to implementing this, I’m not sure what form
grad_variables should take, or what a ‘sequence of Tensor’ is. I’ve tried many things, but they all throw errors.
Would anyone be able to point me in the direction of an example if one exists? If not, say I had the following super simple example:
```python
x = Variable(torch.FloatTensor([[2, 1]]), requires_grad=True)
M = Variable(torch.FloatTensor([[1, 2], [3, 4]]))
y = torch.mm(x, M)
```
What should the arguments to y.backward() be so that I can find
[[dy1/dx1, dy1/dx2], [dy2/dx1, dy2/dx2]] (i.e. recover Mᵀ)?
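For context, here is my best guess at the row-by-row approach: call backward once per output component, passing a one-hot tensor as grad_variables to select that component, and accumulate each resulting x.grad as one row of the Jacobian. I'm using retain_graph here, which I believe is what retain_variables became in later versions; please correct me if this isn't the intended pattern.

```python
import torch
from torch.autograd import Variable

# The example from above
x = Variable(torch.FloatTensor([[2, 1]]), requires_grad=True)
M = Variable(torch.FloatTensor([[1, 2], [3, 4]]))
y = torch.mm(x, M)  # shape (1, 2)

# My guess: one backward pass per output, with a one-hot grad tensor
jacobian = torch.zeros(2, 2)
for i in range(2):
    grad_output = torch.zeros(1, 2)
    grad_output[0, i] = 1.0              # selects dy_i/dx
    # retain the graph so backward can be called again for the next row
    y.backward(grad_output, retain_graph=True)
    jacobian[i] = x.grad.data[0]         # row i = gradient of y_i w.r.t. x
    x.grad.data.zero_()                  # clear accumulated gradient before next pass

print(jacobian)  # I expect this to be M transposed: [[1, 3], [2, 4]]
```

Is this the right way to use grad_variables, or is there a way to get the full Jacobian in a single call?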