I have a neural network x(t) that takes a scalar t and returns a scalar x. I have m = 5000 time points t_1, …, t_m and would like to compute the nth-order derivative of x with respect to t at each of these time points.
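For concreteness, here is a minimal sketch of the setup (the architecture and the linspace grid are just placeholders; the real model is more complicated):

```python
import torch
import torch.nn as nn

# Toy stand-in for the network: scalar t -> scalar x
x_net = nn.Sequential(
    nn.Linear(1, 64),
    nn.Tanh(),
    nn.Linear(64, 1),
)

m = 5000
t = torch.linspace(0.0, 1.0, m).reshape(-1, 1)  # t_1, ..., t_m, shape (m, 1)
```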
I have looked into ways of making torch.autograd.grad accept a batch of input points (t_1, …, t_m), but with no success. It seems this functionality is not implemented, and I was hoping there is a better approach than computing the nth derivative for each sample in a Python for loop (sketched below).
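For reference, this is the kind of per-sample loop I would like to avoid (a sketch reusing x_net and t from above; n is the derivative order):

```python
def nth_derivative_loop(x_net, t, n):
    """d^n x / dt^n at each t_i, one autograd call chain per sample (slow)."""
    derivs = []
    for t_i in t:
        t_i = t_i.detach().clone().requires_grad_(True)  # shape (1,)
        out = x_net(t_i).squeeze()                       # scalar x(t_i)
        for _ in range(n):
            # create_graph=True so the result can be differentiated again
            (out,) = torch.autograd.grad(out, t_i, create_graph=True)
            out = out.squeeze()                          # back to a scalar
        derivs.append(out.detach())
    return torch.stack(derivs)                           # shape (m,)

d2x = nth_derivative_loop(x_net, t, n=2)  # 5000 separate backward chains
```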
I feel like is_grads_batched is not doing what I expected. Namely, given the network x(t), it computes the derivative at one fixed point, x'(t*), and then, for a batch of values t_1, …, t_m passed as grad_outputs, it computes the products x'(t*)·t_1, …, x'(t*)·t_m; in other words, it batches over the grad_outputs rather than over the input points.
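Here is a minimal example of the behavior I am seeing (assuming I am reading it correctly; t_star and vs are just placeholder names):

```python
t_star = torch.tensor([0.5], requires_grad=True)
out = x_net(t_star)                       # shape (1,), ONE evaluation point

vs = torch.tensor([[1.0], [2.0], [3.0]])  # batch of grad_outputs

(g,) = torch.autograd.grad(out, t_star, grad_outputs=vs, is_grads_batched=True)
# g has shape (3, 1) and equals x'(t_star) * 1.0, x'(t_star) * 2.0, ...
# i.e. the batching is over grad_outputs, not over the evaluation point.
```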
What I am trying to do is compute x'(t_1), …, x'(t_m). In other words, I would like to evaluate the derivative of the network x(t) at a batch of sample points t_1, …, t_m. Is there a batched way to do this in PyTorch?