How to retain leaf node status of a tensor when using .to(memory_format=torch.channels_last)

I’m trying to create a random tensor with memory_format=torch.channels_last. I’m doing it as follows:

_X = torch.rand(3, 3, 70, 70, requires_grad=True, dtype=torch.float32, pin_memory=True)
X = _X.to(memory_format=torch.channels_last)

But I still get the "The .grad attribute of a Tensor that is not a leaf Tensor is being accessed" warning, and X.grad is None after the backward pass. If I remove the second line (and replace _X with X in the first one), everything works as expected.

It’s the same if I swap _X and X in the code above.

You can .detach() the tensor and call .requires_grad_() on the detached version to create a new leaf tensor:

_X = torch.rand(3, 3, 70, 70, requires_grad=True, dtype=torch.float32, pin_memory=True)
X = _X.to(memory_format=torch.channels_last).detach()
X.requires_grad_()
print(X.is_leaf)
# True
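A complete round trip of the suggested fix, verifying both that X is a leaf that accumulates gradients and that the channels_last layout survives. Note that pin_memory=True from the original snippet is dropped here so the sketch also runs on a CPU-only machine; it is orthogonal to the leaf/grad issue.

```python
import torch

_X = torch.rand(3, 3, 70, 70, requires_grad=True, dtype=torch.float32)

# Convert to channels_last, then detach from the graph and re-enable
# grad tracking so the result is a fresh leaf tensor.
X = _X.to(memory_format=torch.channels_last).detach()
X.requires_grad_()

print(X.is_leaf)                                           # True
print(X.is_contiguous(memory_format=torch.channels_last))  # True

# A dummy backward pass now populates X.grad with no warning.
X.sum().backward()
print(X.grad is not None)                                  # True
```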