Consider some algorithm that looks like this:
phi1 = torch.empty(size=(p, p, m, m), dtype=torch.float, requires_grad=True)
phi2 = torch.empty(size=(p, p, m, m), dtype=torch.float, requires_grad=True)
for s in range(p):
    for k in range(s):
        phi1[s, k] = phi1[s-1, k] - phi1[s, s] @ phi2[s-1, s-k]
        phi2[s, k] = phi2[s-1, k] - phi2[s, s] @ phi1[s-1, s-k]
… [bunch of other functions]
lposterior = some scalar-valued function of phi1 and phi2
lposterior.backward()
When I run this I get a bunch of errors, e.g. about in-place operations (presumably because I am assigning into slices of phi1 and phi2, which autograd does not allow on a leaf tensor that requires grad). The algorithm runs fine without the 'lposterior.backward()' line. What can I do to make autodiff work here?
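For concreteness, here is a minimal runnable sketch of one workaround (the names phi1_diag/phi2_diag, the small sizes, and the toy objective are placeholders, not from the original code): keep the learnable blocks as separate leaf tensors, and build the recursion in nested Python lists using only out-of-place operations, so nothing is ever written into a slice of a leaf tensor.

```python
import torch

p, m = 3, 2  # small illustrative sizes (hypothetical)

# Learnable blocks stay leaf tensors; we never write into them in place.
phi1_diag = torch.randn(p, m, m, requires_grad=True)
phi2_diag = torch.randn(p, m, m, requires_grad=True)

# Nested lists of fresh tensors instead of a single (p, p, m, m) leaf tensor;
# every entry below is produced by an out-of-place op, so autograd can track it.
phi1 = [[torch.zeros(m, m) for _ in range(p)] for _ in range(p)]
phi2 = [[torch.zeros(m, m) for _ in range(p)] for _ in range(p)]
for s in range(p):
    phi1[s][s] = phi1_diag[s]
    phi2[s][s] = phi2_diag[s]
    for k in range(s):
        phi1[s][k] = phi1[s-1][k] - phi1[s][s] @ phi2[s-1][s-k]
        phi2[s][k] = phi2[s-1][k] - phi2[s][s] @ phi1[s-1][s-k]

# Toy scalar stand-in for lposterior, just to check that gradients flow.
lposterior = sum(t.sum() for row in phi1 for t in row) \
           + sum(t.sum() for row in phi2 for t in row)
lposterior.backward()
```

If you need phi1 back as a single (p, p, m, m) tensor afterwards, you can torch.stack the list entries; torch.stack is differentiable, so gradients still reach phi1_diag and phi2_diag.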