Keep the gradient function when changing the data type of a tensor

I’m losing the grad_fn when changing the data type of a tensor. Is it possible to keep the grad_fn when doing this?

e.g. I have a float tensor called my_tensor with a grad_fn:

new_tensor = my_tensor.type(torch.uint8)

Now new_tensor has no grad_fn.

Hi, please see the following code -

import torch

x = torch.tensor([1.0, 2], requires_grad=True)
x = x + 5                  # x now has a grad_fn (AddBackward0)
y = x.type(torch.double)   # cast to another floating-point dtype
z = x.type(torch.uint8)    # cast to an integer dtype

print(y)
print(z)

out -

tensor([6., 7.], dtype=torch.float64, grad_fn=<ToCopyBackward0>)
tensor([6, 7], dtype=torch.uint8)

The explanation is that integer-type tensors cannot require gradients, because integer-valued functions are not meaningfully differentiable.

And so y, the tensor cast to double, keeps a grad_fn attribute, while z, which is an integer tensor, has its grad_fn set to None.
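
As a quick sanity check (a minimal sketch; the tensor names here are illustrative), PyTorch refuses outright to set requires_grad on an integer tensor, while gradients flow end to end through a cast between floating-point dtypes:

import torch

# Asking an integer tensor to require gradients raises a RuntimeError:
# "Only Tensors of floating point and complex dtype can require gradients"
try:
    torch.tensor([1, 2], requires_grad=True)
except RuntimeError as e:
    print(e)

# By contrast, a float-to-double cast keeps the graph intact,
# so backward() reaches the original leaf tensor.
a = torch.tensor([1.0, 2.0], requires_grad=True)  # leaf tensor
b = (a + 5).type(torch.double)                    # cast keeps grad_fn
b.sum().backward()
print(a.grad)  # tensor([1., 1.]) -- d(sum(a + 5))/da is 1 per element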