Concatenating Tensors While Maintaining Gradients for Autograd

Essentially, I have a list of tensors of size (3,) that I want to concatenate into a single large tensor of size (200, 3), without them becoming disconnected from the computation graph. A toy example of this would be:

import torch
from torch.autograd import Variable

a = Variable(torch.Tensor([1., 3., 5.]), requires_grad=True)
b = Variable(torch.Tensor([2., 4., 6.]), requires_grad=True)
c = torch.matmul(a.t(), b)

print(torch.autograd.grad(c, a, retain_graph=True)[0])
print(torch.autograd.grad(c, b, retain_graph=True)[0])

z = torch.cat((a, b))  # z has shape (6,) and is still connected to a and b in the graph
print(torch.autograd.grad(c, z)[0])  # raises a RuntimeError: c was never computed from z

Is there any way to concatenate tensors a and b (and realistically, a list of 200 such tensors) so that the last line doesn't error out? (i.e. I need to preserve the computation graph.)
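
For context, the kind of thing I'm ultimately trying to do looks roughly like this (params and loss are placeholders, not my real tensors or objective). As far as I understand, stacking itself doesn't detach anything, and gradients flow back to each of the 200 leaf tensors as long as the scalar is computed from the stacked tensor:

import torch

# placeholder list standing in for my 200 tensors of shape (3,)
params = [torch.randn(3, requires_grad=True) for _ in range(200)]

# stacking keeps the result connected: its grad_fn points back to every input
stacked = torch.stack(params)  # shape (200, 3)

loss = (stacked ** 2).sum()  # any scalar computed from the stacked tensor
loss.backward()

print(params[0].grad)  # each leaf tensor now has a gradient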

Would print(torch.autograd.grad(c, (a, b))) just work?
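
For reference, grad accepts a tuple of inputs and returns a tuple with one gradient per input, so both gradients are available; indexing only [0] as in the original snippet would show just the first one. A rough self-contained sketch using plain tensors:

import torch

a = torch.tensor([1., 3., 5.], requires_grad=True)
b = torch.tensor([2., 4., 6.], requires_grad=True)
c = torch.matmul(a, b)

grads = torch.autograd.grad(c, (a, b))  # one gradient per input
print(grads[0])  # d(c)/d(a) = b -> tensor([2., 4., 6.])
print(grads[1])  # d(c)/d(b) = a -> tensor([1., 3., 5.])

# a single (6,) gradient can be assembled by concatenating the pieces
print(torch.cat(grads))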

Unrelated to this problem, but Variables have been deprecated since PyTorch 0.4. If you are using a newer version, you can use tensors directly.

Hi, your solution only prints out the gradient for the first element in the tuple (i.e. a).

I’m aware of the note about Variables, but I have to use a lot of older code for my research and it’s just become a force of habit these days. Thanks for pointing it out, though!