PyTorch autograd: gradient of a vector output with respect to a scalar

Hi, everyone! First, thanks for the help. In my project I want to use torch's autograd in a slightly unusual way: I need the gradient of an output vector with respect to a scalar. Concretely, in ray tracing, if I trace 4096 rays, the surface curvature affects all 4096 rays, and I want the gradient of every ray's optical path length with respect to that curvature. In short, the optical path lengths form a tensor of shape (4096,), the curvature is a tensor of shape (1,), and I want the gradient of each element of the (4096,) tensor with respect to the (1,) tensor. However, it seems like torch's autograd always expects a scalar function, not a vector-valued one?

BTW, I know I can use torch.autograd.functional.jacobian() for a vector-valued function, but in my case the function is very complex (the whole ray-tracing process), so I have no idea how to apply it to my problem. :dizzy_face:
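Just to show what I mean: this is roughly how jacobian() would be used if the whole process were wrapped in a single callable (trace_rays here is only a made-up stand-in, not my real code):

import torch

# Made-up stand-in for my real ray-tracing pipeline: maps a (1,)-shaped
# curvature to the optical path lengths of N rays, shape (N,).
def trace_rays(curvature):
    base = torch.linspace(0.1, 1.0, 4096)       # placeholder per-ray geometry
    return base * curvature + curvature ** 2    # placeholder OPL formula

curvature = torch.tensor([0.5])

# d(OPL)/d(curvature) for every ray at once, shape (4096, 1)
J = torch.autograd.functional.jacobian(trace_rays, curvature)
print(J.shape)  # torch.Size([4096, 1])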

.grad and .backward also work with non-scalar outputs if you explicitly specify a gradient vector, i.e. grad_output.
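E.g. something like this (a small sketch with placeholder tensors; grad_outputs is the v in the vector-Jacobian product v^T J that backward/grad compute):

import torch

a = torch.tensor([1.], requires_grad=True)
b = torch.tensor([1., 2., 3.])
c = b * a  # non-scalar output, shape (3,)

# A one-hot v returns the gradient of a single output element.
g, = torch.autograd.grad(c, a, grad_outputs=torch.tensor([0., 1., 0.]))
print(g)  # tensor([2.]) == d(c[1]) / d(a)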

Thanks! .backward with an explicitly specified gradient vector does work with a non-scalar output; however, it gives the sum of the gradients. I want the gradients in vector form, not their sum, like this:

import torch

a = torch.tensor([1.], requires_grad=True)
b = torch.tensor([1., 2., 3.])
c = b * a                       # non-scalar output, shape (3,)
c.backward(torch.ones_like(c))  # backward with an all-ones gradient vector
a.grad

What I expect is a tensor like,

[gradient_0, gradient_1, gradient_2]

What I really got was,

tensor([6.])

It looks like the sum of the gradients.
Any further suggestions? :thinking: :pleading_face:
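For reference, on the toy example above, what I'm after is what torch.autograd.functional.jacobian would return, i.e. one gradient per output element rather than their sum; my doubt is just whether that is feasible for the full ray-tracing pipeline:

import torch

a = torch.tensor([1.], requires_grad=True)
b = torch.tensor([1., 2., 3.])

# The Jacobian of the (3,)-shaped output w.r.t. the (1,)-shaped input
# has shape (3, 1): one gradient per output element instead of their sum.
J = torch.autograd.functional.jacobian(lambda x: b * x, a)
print(J.squeeze(-1))  # tensor([1., 2., 3.])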