How can I get separate gradients instead of a summed one

Dear all,

I’m trying to use autograd.grad to calculate partial derivatives. However, I found that the grad results are summed up, but I want separate ones.
The code is as follows:

import torch

def df4_dx(p, c):
    # distance between each p[i] and each c[j]; the exact computation wasn't shown
    # in my snippet, assume Euclidean distance here
    dist = torch.cdist(p, c.float())
    dist_grad = torch.autograd.grad(outputs=dist, inputs=[p],
                                    grad_outputs=torch.ones_like(dist),
                                    create_graph=True)
    print("dist_grad xxxxxxx= ", dist_grad)
    return dist_grad

p2 = torch.tensor([[1.0, 2.0], [1.0, 4.0], [1.0, 6.0], [1.0, 8.0]], requires_grad=True)
ci2 = torch.tensor([[0, 0], [0, 1], [1, 2]])

z4 = df4_dx(p2, ci2)
dfdx, dfdy = torch.zeros(p2.shape[0], ci2.shape[0]), torch.zeros(p2.shape[0], ci2.shape[0])
print("z4 = ", z4)

Here I calculate the distance between each pi and each ci. Since there are multiple ci, I’m expecting [dfdx at c0, dfdx at c1, ...], but what I get is [dfdx at c0 + dfdx at c1 + ...]. I can get the separate results if I run autograd on dist[:, 0], dist[:, 1], ... one column at a time, which effectively acts like a for-loop (see the sketch below). May I ask if there is a better way to achieve this? Thanks.
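The column-by-column workaround I mean looks roughly like this (a minimal sketch, assuming dist comes from torch.cdist as above):

# one autograd.grad call per center column -> one separate (num_points, 2) gradient each
dist = torch.cdist(p2, ci2.float())
per_center = [torch.autograd.grad(dist[:, j].sum(), p2, create_graph=True)[0]
              for j in range(ci2.shape[0])]
grads = torch.stack(per_center, dim=1)   # (num_points, num_centers, 2)
dfdx, dfdy = grads[..., 0], grads[..., 1]
# note: the gradient is NaN wherever a point coincides with a center,
# since the norm is not differentiable at 0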

Hi @Yuhan_PING, you might want to have a look at the torch.func package, as it allows for per-sample gradients instead of summed gradients via torch.func.vmap.
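A minimal sketch of that approach, assuming the distance is the Euclidean norm between each point and each center (the point_center_dist helper here is just for illustration; swap in your actual distance computation):

import torch
from torch.func import grad, vmap

def point_center_dist(p_i, c_j):
    # scalar Euclidean distance between a single point and a single center
    return torch.linalg.norm(p_i - c_j)

# grad() differentiates the scalar distance w.r.t. p_i; the two nested vmaps
# map it over all centers (inner) and all points (outer), so nothing gets summed.
pairwise_grad = vmap(vmap(grad(point_center_dist), in_dims=(None, 0)),
                     in_dims=(0, None))(p2, ci2.float())

print(pairwise_grad.shape)    # (num_points, num_centers, 2)
dfdx = pairwise_grad[..., 0]  # d dist[i, j] / d x_i
dfdy = pairwise_grad[..., 1]  # d dist[i, j] / d y_i

torch.func.jacrev over the full pairwise-distance function would contain the same information, but since dist[i, j] only depends on p[i], the vmap-of-grad form avoids computing the mostly-zero cross terms.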