How to use torch.cat and preserve the grad history unchanged?

Put simply: I have a few variables, say x1, x2, x3, with requires_grad set to True. Using torch.Tensor removes all the grad history:

ipdb> x1
tensor(0.6684, device='cuda:0', grad_fn=<L1LossBackward>)

ipdb> x2
tensor(0.7662, device='cuda:0', grad_fn=<L1LossBackward>)

ipdb> torch.Tensor((x1, x2))
tensor([0.6684, 0.7662])

I need to concatenate the variables into one tensor and preserve the grad history, so that I can use backward pass later.

I have tried torch.Tensor((x1, x2, x3)), but all the grad history is lost.
Using torch.cat((x1, x2, x3)) did not work either; it throws "RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated".
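
For reference, a self-contained sketch that reproduces both behaviours (a, x1, x2 here are made-up stand-ins for my actual losses):

import torch

a = torch.randn(5, requires_grad=True)
x1 = a.mean()       # 0-dim tensor with a grad_fn
x2 = a.abs().sum()  # 0-dim tensor with a grad_fn

torch.Tensor((x1, x2))  # new tensor, no grad_fn, history is gone
torch.cat((x1, x2))     # RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated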

One way around this is to define a tensor
z = torch.ones(3, dtype=torch.float32, requires_grad=True).to(device)

and then multiply each entry of z by the corresponding x:

z[0] = z[0]*x1
z[1] = z[1]*x2
z[2] = z[2]*x3

Any ideas or alternatives?


Could you try to use torch.stack instead?
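
For example, a minimal sketch (the losses here are made up; the point is that torch.stack accepts 0-dim tensors and the result keeps a grad_fn):

import torch

a = torch.randn(4, requires_grad=True)

# stand-ins for x1, x2, x3: 0-dim tensors with their own grad history
x1 = a.mean()
x2 = (a ** 2).sum()
x3 = a.abs().sum()

# stack adds a new leading dimension, so 0-dim inputs are fine,
# and the result still carries a grad_fn
z = torch.stack((x1, x2, x3))
print(z)            # tensor([...], grad_fn=<StackBackward0>)

z.sum().backward()  # gradients flow back to a through x1, x2, x3
print(a.grad)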


What about explicitly setting the flag?

x = torch.tensor(1., requires_grad=True)
torch.tensor([x**3, 2*x], requires_grad=True)

Edit: sorry, that won’t work, as it creates new variables (one in place of x**3 and one in place of 2*x) that start their own grad history, disconnected from x.
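
A quick check of that, sticking with the torch.stack suggestion above (sketch only):

import torch

x = torch.tensor(1., requires_grad=True)

y = torch.tensor([x**3, 2*x], requires_grad=True)  # y is a brand-new leaf
y.sum().backward()
print(x.grad)  # None -- the graph behind y does not reach back to x

z = torch.stack((x**3, 2*x))  # keeps the connection to x
z.sum().backward()
print(x.grad)  # tensor(5.) = 3*x**2 + 2 evaluated at x = 1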