Hi.

I’m trying to compute a gradient with the torch.autograd.grad function.

I set a and b as follows.

import torch

a = torch.tensor([2.0], requires_grad=True)  # torch.tensor replaces the deprecated Variable API

b = a * torch.tensor([3.0, 4.0])

and computed the gradient with

torch.autograd.grad(b, a, create_graph=True, grad_outputs=torch.ones_like(b))[0]

This gives the result tensor([7.]).

But as far as I can tell from the math, the derivative of b with respect to a should be torch.tensor([3., 4.]).
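Writing out the shapes, my guess from the docs is that grad computes a vector-Jacobian product rather than the full Jacobian (please correct me if this is wrong):

\[
b = \begin{bmatrix} 3a \\ 4a \end{bmatrix}, \qquad
J = \frac{\partial b}{\partial a} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}, \qquad
v^\top J = \begin{bmatrix} 1 & 1 \end{bmatrix} \begin{bmatrix} 3 \\ 4 \end{bmatrix} = 3 + 4 = 7,
\]

with v = grad_outputs = torch.ones_like(b).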

Why does the grad function automatically sum up the results?

And how can I get the gradient as the vector [3, 4]?
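For reference, here is a sketch of what I’m considering, based on torch.autograd.functional.jacobian and on calling grad once per output component with one-hot grad_outputs. The wrapper f below is just my own helper, and I’m not sure either of these is the intended approach:

import torch

def f(x):
    # b = x * [3, 4]; output has shape (2,), input has shape (1,)
    return x * torch.tensor([3.0, 4.0])

a = torch.tensor([2.0], requires_grad=True)

# Option 1: full Jacobian via torch.autograd.functional.jacobian
J = torch.autograd.functional.jacobian(f, a)  # shape (2, 1)
print(J.squeeze())  # tensor([3., 4.])

# Option 2: one grad call per output component, using one-hot grad_outputs
b = f(a)
rows = [torch.autograd.grad(b, a, grad_outputs=torch.eye(2)[i], retain_graph=True)[0]
        for i in range(2)]
print(torch.stack(rows).squeeze())  # tensor([3., 4.])

Option 2 is what I understand grad_outputs to be for: each one-hot vector selects a single row of the Jacobian instead of summing over all outputs.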