Essentially, I have a list of 200 tensors, each of shape (3,), that I want to combine into a single large tensor of shape (200, 3) (e.g. via torch.stack). However, I don't want them to become disconnected from the computation graph. A toy example of this would be:
import torch
a = torch.tensor([1., 3., 5.], requires_grad=True)
b = torch.tensor([2., 4., 6.], requires_grad=True)
c = torch.matmul(a, b)  # dot product of two 1-D tensors (.t() is a no-op here)
print(torch.autograd.grad(c, a, retain_graph=True)[0])  # tensor([2., 4., 6.]), i.e. b
print(torch.autograd.grad(c, b, retain_graph=True)[0])  # tensor([1., 3., 5.]), i.e. a
z = torch.cat((a, b))  # z is a new graph node downstream of a and b
print(torch.autograd.grad(c, z)[0])  # RuntimeError: z was never used to compute c
Is there any way to concatenate tensors a and b (and realistically, a list of 200 such tensors) so that the last line doesn't error out? (i.e. the result needs to stay connected to the computation graph)
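For concreteness, here is a sketch of the full-scale pattern I'm trying to make work; tensor_list and the quadratic loss are just placeholders, and the last line fails the same way as the toy example:

import torch

# 200 leaf tensors, each of shape (3,)
tensor_list = [torch.randn(3, requires_grad=True) for _ in range(200)]

# the loss is built from the individual tensors, as in the toy example above
loss = sum(t.dot(t) for t in tensor_list)

# the (200, 3) tensor I want, but it is a new node downstream of the leaves
z = torch.stack(tensor_list)

# errors for the same reason as grad(c, z) above: loss was not computed from z
print(torch.autograd.grad(loss, z)[0])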