Yes, it is. This should work:
L = Variable(torch.Tensor(i_size, j_size))
# It's important not to specify requires_grad=True here.
# That makes sense - you don't need grad w.r.t. the original content of L,
# because it will be overwritten.
for i in range(i_size):
    for j in range(j_size):
        L[i, j] = ...  # compute the value here
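For instance, if each entry were the product of entries of two 1-D Variables x and y (a hypothetical example, not part of the original question), the loop might look like this:

import torch
from torch.autograd import Variable

i_size, j_size = 4, 5
x = Variable(torch.randn(i_size), requires_grad=True)
y = Variable(torch.randn(j_size), requires_grad=True)

L = Variable(torch.Tensor(i_size, j_size))
for i in range(i_size):
    for j in range(j_size):
        # each assignment records a separate multiplication in the graph
        L[i, j] = x[i] * y[j]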
But beware: this can be very, very slow! Not only because you'll be looping over the elements in Python, but also because computing the result will involve a lot of autograd ops, and each one carries a constant overhead. That's not a huge problem if each element requires relatively expensive computation like a matrix multiplication or a convolution, but for simple ops the bookkeeping can cost more than the computation itself.
In the vast majority of cases it's possible to rewrite the equations so that you don't have to compute individual elements in a loop; instead, you can use a few matrix-matrix operations that achieve the same thing but compute the results in C using highly optimized routines. For an example, you can look at how @fmassa rewrote the loss function in another thread. A vectorized version of the sketch above is shown below.
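As a minimal sketch (reusing the hypothetical x and y from above), the double loop collapses into a single outer-product call:

# One autograd op for the whole matrix instead of i_size * j_size of them;
# the multiplications run in optimized C code.
L = torch.ger(x, y)  # outer product: L[i, j] = x[i] * y[j]

Equivalently, broadcasting (x.unsqueeze(1) * y.unsqueeze(0)) produces the same result; either way, the Python loop and its per-element autograd overhead disappear.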