Is there any difference between to(), type() and type_as()? I don't understand why we need three implementations, and in particular what the difference is between to() and type(). It seems that xx.to(x.type()) raises an error, while xx.to(torch.float) works. It seems better to combine them, or to use to() only for changing the device.
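For reference, a minimal snippet reproducing the observation (the exact error type and message may vary between PyTorch versions):

```python
import torch

x = torch.randn(3)
xx = torch.randn(3, dtype=torch.float64)

# x.type() with no arguments returns a type string such as 'torch.FloatTensor',
# which to() does not accept as an argument, so this call fails
try:
    xx.to(x.type())
except (RuntimeError, TypeError, ValueError) as e:
    print('xx.to(x.type()) failed:', e)

# Passing a dtype works as expected
print(xx.to(torch.float).dtype)   # torch.float32
```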
type() is an older method that works only with the tensor types (e.g. DoubleTensor, HalfTensor, etc.), whereas to() was added in 0.4 along with the introduction of dtypes (e.g. torch.float, torch.half) and devices, which are more flexible. to() is the preferred method now.
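A quick sketch of the two APIs side by side (assuming a recent PyTorch; the CUDA line is guarded so it also runs on CPU-only machines):

```python
import torch

x = torch.randn(3)                       # torch.float32 on CPU

# Old-style API: works with tensor types
x_double = x.type(torch.DoubleTensor)    # torch.float64

# to() accepts a dtype, a device, or both
x_half = x.to(torch.half)                # change dtype only
x_f64_cpu = x.to('cpu', torch.float64)   # device and dtype in one call
if torch.cuda.is_available():
    x_gpu = x.to('cuda')                 # change device only
```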
Thanks for your help @RicCu. So for now, dtype is also preferred over type, right? And I should use x.to(y.dtype) instead of x.type_as(y) or x.type(y.type()).
Yes, dtype is preferred.
You can actually do x = x.to(y), which returns a copy of x with the dtype and the device of y. If you only want to change the tensor's dtype or device, but not both, then yeah, you would do it with to(y.dtype).
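A small illustration of that, using two hypothetical CPU tensors just to show the resulting dtypes:

```python
import torch

x = torch.randn(4)                        # torch.float32
y = torch.zeros(2, dtype=torch.float64)   # torch.float64

x_like_y = x.to(y)        # copy of x with y's dtype (and device)
x_f64 = x.to(y.dtype)     # change only the dtype
x_old = x.type_as(y)      # older equivalent when both tensors share a device

print(x_like_y.dtype, x_f64.dtype, x_old.dtype)   # all torch.float64
```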
Thanks for your help.