How to use torch.autograd.backward for a matrix?

Hello, I want to use .backward() to calculate the gradient w.r.t. each element of my output matrix X_ij. However, the matrix is huge, and iterating over each position while specifying the gradient argument in .backward(gradient=...) takes way too much time. I have not yet found a better way than iterating over rows i and columns j with a nested for-loop, like so:

output.shape  # [60, 120]
element_wise_grads = []
for i in range(60):
    for j in range(120):
        # One-hot gradient that selects the (i, j) entry of the output.
        external_grad = torch.zeros_like(output)
        external_grad[i, j] = 1
        # Clear the previous gradient first; backward() accumulates into
        # .grad, so without this each entry would be a running sum.
        embeddings.grad = None
        output.backward(gradient=external_grad, retain_graph=True)
        element_wise_grads.append(embeddings.grad)
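
For context, I believe what this loop builds is the full Jacobian of output with respect to embeddings, so I wonder whether torch.autograd.functional.jacobian would compute the same thing in a single call. A minimal sketch of what I mean, assuming a hypothetical function f standing in for my real model (which maps embeddings to output and is not shown above):

import torch

def f(emb):
    # Stand-in for the real forward computation embeddings -> output;
    # my_model here is hypothetical, replace with the actual model call.
    return my_model(emb)

# jac has shape [60, 120, *embeddings.shape]: one gradient per output entry.
# vectorize=True batches the backward passes instead of looping in Python.
jac = torch.autograd.functional.jacobian(f, embeddings, vectorize=True)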

Is there a more efficient way to do this (or is the jacobian route above the right one)?
Thank you!