In the forward method of my model, I build a list of tensors, all of which have requires_grad=True. However, when I combine them into a single tensor with torch.FloatTensor(myTensorList), the resulting tensor has requires_grad=False, which breaks the graph for computing gradients. Will setting requires_grad=True when casting my list to a FloatTensor keep my graph intact? Is it safe to do this? If not, what else can I do?
Hi,
This happens because torch.FloatTensor creates a brand-new tensor from the raw values, with no connection to the autograd graph. Setting requires_grad=True on that new tensor will not help either: it just makes the copy a fresh leaf, so gradients still won't flow back to the original tensors. Instead, use torch.cat(myTensorList) or torch.stack(myTensorList), which are differentiable operations that preserve the graph.
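A minimal sketch of the difference (the list name and shapes are just for illustration):

```python
import torch

# A list of tensors that participate in the autograd graph.
xs = [torch.randn(3, requires_grad=True) for _ in range(4)]

# torch.stack keeps the graph: the result has a grad_fn,
# so gradients flow back to every tensor in the list.
stacked = torch.stack(xs)            # shape (4, 3)
print(stacked.requires_grad)         # True
print(stacked.grad_fn is not None)   # True

# torch.cat also works; it joins along an existing dimension.
concatenated = torch.cat(xs)         # shape (12,)
print(concatenated.requires_grad)    # True

# Backprop reaches the original list elements.
stacked.sum().backward()
print(xs[0].grad)                    # tensor([1., 1., 1.])
```

By contrast, torch.tensor([t.item() for t in xs]) or torch.FloatTensor copies only the values, producing a detached leaf with no grad_fn, which is why the graph breaks.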