Construct a sparse tensor while propagating gradients?

I have code similar to this that I would like to make faster:

# indices: (i, j, k) index triples into a 3-D tensor of shape (L, N, N)
# values:  the values associated with those indices (they require grad)

import torch

result = torch.zeros((L, N, N))
for idx, (i, j, k) in enumerate(indices):
    # build a dense one-hot mask and add the corresponding value
    mask = torch.zeros_like(result)
    mask[i, j, k] = 1.0
    result = result + mask * values[idx]

Now, even when I build the mask as a sparse tensor instead, each iteration runs slower. Is there a simple way to write a function of the form result = func(indices, values) that still propagates the gradient to values?
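
For concreteness, here is a rough sketch of the vectorized form I am looking for, assuming indices is an (M, 3) LongTensor and values is a 1-D tensor with requires_grad=True (the sizes and the exact calls below are just my guess at a suitable API, not something I have confirmed is faster):

import torch

L, N = 4, 5                                                  # example sizes, for illustration only
indices = torch.tensor([[0, 1, 2], [3, 4, 0], [0, 1, 2]])    # (M, 3) index triples
values = torch.randn(indices.shape[0], requires_grad=True)   # one value per triple

# Candidate 1: scatter all values at once with index_put
# (accumulate=True sums duplicate indices, matching the additive loop above)
result = torch.zeros((L, N, N)).index_put(
    (indices[:, 0], indices[:, 1], indices[:, 2]),
    values,
    accumulate=True,
)

# Candidate 2: build a sparse COO tensor and densify it
result_sparse = torch.sparse_coo_tensor(indices.t(), values, (L, N, N)).to_dense()

result.sum().backward()    # gradient should flow back to values
print(values.grad)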