PyTorch gradient computation for a multidimensional tensor

Hello, this question has probably been asked before, but I couldn't find it. If so, could you kindly redirect me to that thread? Thank you.

Is there an easier way to compute the gradient of each column of a matrix with respect to the input in PyTorch? Right now I loop over the columns and call grad once per column:

import torch
from torch.autograd import grad

# features: columns are cos(w * x) for w = 0..4, shape (50, 5)
x = torch.linspace(0, 1, 50, requires_grad=True).view(-1, 1)
inputs = torch.hstack([torch.cos(w * x) for w in range(5)])

# gradient of each column w.r.t. x, one backward pass per column
# (create_graph=True already implies retain_graph=True)
d_inputs = torch.hstack([grad(f.sum(), x, create_graph=True)[0] for f in inputs.T])
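For reference, here is a sketch of one possible alternative I am aware of, assuming PyTorch 1.11 or newer: `torch.autograd.grad` accepts `is_grads_batched=True`, which batches all per-column backward passes into a single (vmapped) call. The one-hot `grad_outputs` construction below is my own illustration, not something from an official recipe:

```python
import torch
from torch.autograd import grad

x = torch.linspace(0, 1, 50, requires_grad=True).view(-1, 1)
inputs = torch.hstack([torch.cos(w * x) for w in range(5)])

# One grad_outputs "vector" per column: v[k] is all zeros except column k.
v = torch.eye(5).unsqueeze(1).repeat(1, 50, 1)          # shape (5, 50, 5)

# A single batched backward pass replaces the Python loop (PyTorch >= 1.11).
d_batched = grad(inputs, x, grad_outputs=v, create_graph=True,
                 is_grads_batched=True)[0]              # shape (5, 50, 1)
d_inputs = d_batched.squeeze(-1).T                      # shape (50, 5)

# Sanity check against the analytic derivative d/dx cos(w x) = -w sin(w x).
expected = torch.hstack([-w * torch.sin(w * x) for w in range(5)])
assert torch.allclose(d_inputs, expected, atol=1e-6)
```

Whether this is actually faster than the loop will depend on the model; for five small columns the difference is likely negligible, but it avoids repeated graph traversals.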

Thank you.