How to avoid in-place errors when doing autodiff with loops?

Consider some algorithm that looks like this:

import torch

# p and m are integers defined elsewhere
phi1 = torch.tensor(torch.empty(size=(p, p, m, m)), dtype=torch.float, requires_grad=True)
phi2 = torch.tensor(torch.empty(size=(p, p, m, m)), dtype=torch.float, requires_grad=True)

for s in range(p):
    for k in range(s):
        phi1[s, k] = phi1[s-1, k] - phi1[s, s] @ phi2[s-1, s-k]
        phi2[s, k] = phi2[s-1, k] - phi2[s, s] @ phi1[s-1, s-k]

… [bunch of other functions]

lposterior = function of phi1 and phi2 which yields a scalar
lposterior.backward()

When I ran this I got a bunch of errors, e.g. in-place operation errors (presumably because I am slicing into phi1 and phi2 and this is not allowed). The algorithm runs fine without the lposterior.backward() line. What can I do to make autodiff work here?
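
For reference, here is a tiny illustrative example of the kind of in-place failure I suspect is happening (a and b are made-up names, not part of my model); it also only blows up once backward() is called:

import torch

a = torch.randn(3, requires_grad=True)
b = a.exp()         # exp saves its output for the backward pass
b[0] = 0.0          # in-place write into a tensor autograd still needs
b.sum().backward()  # RuntimeError: one of the variables needed for gradient
                    # computation has been modified by an inplace operation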

I think (just hypothesising) this is due to the torch.empty tensors: you need to initialise the tensors with actual values in order for the .backward() method to have gradients to work with.

You could initialise the phis with random values and then check whether the algorithm still runs and is indeed capable of autograd.
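
A minimal sketch of that check (p and m are placeholder sizes I have assumed, and it only tests a simplified function rather than your full loop):

import torch

p, m = 4, 2  # placeholder sizes, just for illustration

# initialise with random values instead of torch.empty
phi1 = torch.randn(p, p, m, m, dtype=torch.float, requires_grad=True)
phi2 = torch.randn(p, p, m, m, dtype=torch.float, requires_grad=True)

# quick sanity check that autograd flows through these tensors
test = (phi1 @ phi2).sum()
test.backward()
print(phi1.grad.shape)  # should print torch.Size([4, 4, 2, 2])

If this check passes but the full loop still fails, then the slicing assignments you mention in the question are the next thing to look at.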