Why does tensor type cast not work sometimes?

After tensor.half() is called, the tensor's dtype is not cast to half.
I found this in the docs:
"If the self Tensor already has the correct [ torch.dtype ] and [ torch.device ], then self is returned. Otherwise, the returned tensor is a copy of self with the desired [ torch.dtype ] and [ torch.device ]."
So, what does the "correct [ torch.dtype ] and [ torch.device ]" really mean?

For me the tensor is certainly cast:

torch.randn(5).half()
tensor([ 0.1776,  0.1658,  0.0118, -0.0336, -0.1261], dtype=torch.float16)

torch.randn(5, device='cuda').half()
tensor([ 0.5459,  0.3481, -1.5215, -0.3481,  0.2035], device='cuda:0',
       dtype=torch.float16)

What the sentence you quoted means is that when the input (self) already has the desired dtype / device, you get back the very same tensor, not a copy (whereas when a cast is needed, that necessarily implies copying).
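This same-object-vs-copy behavior can be checked directly with `is`; a minimal sketch, assuming a CPU-only setup so only the dtype part is exercised:

```python
import torch

# A tensor that is already float16:
t = torch.randn(5).half()

# Requesting the dtype it already has returns the very same object, no copy:
same = t.half()
print(same is t)      # True

# Requesting a different dtype returns a new tensor; the original is untouched:
casted = t.float()
print(casted is t)    # False
print(t.dtype)        # torch.float16
```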

Best regards


Thanks for your reply.

Here is my code:

print("tensors.dtype: ", tensors.dtype)  # -> tensors.dtype: torch.float32
self.tensors = tensors
print("tensors.dtype: ", tensors.dtype)  # -> tensors.dtype: torch.float32, not torch.float16
self.image_sizes = image_sizes
very confused!

Tensor.half is not an in-place operation (on Tensors, as opposed to Modules, only functions ending with _ are in-place).
So use cast_tensors = tensors.half() or similar.
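To illustrate the difference, a minimal sketch (the variable names here are illustrative, not taken from your snippet):

```python
import torch

tensors = torch.randn(3)        # torch.float32 by default

# Calling half() without keeping the result does nothing to `tensors`:
tensors.half()
print(tensors.dtype)            # torch.float32, unchanged

# Assigning the returned copy is what actually gives you the float16 data:
cast_tensors = tensors.half()
print(cast_tensors.dtype)       # torch.float16
```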

Best regards

