Will tensor.to() from float32 to float16 lose precision or values?

Sorry if this is an obvious question. I know converting from float32 to float16 will lose precision and accuracy, but is there a better way than using the tensor.to() function?

I have a 1x3x576x960 tensor that I want to convert from float32 to float16.

I used

lrs_prev_fp16 = lrs_prev.to(torch.float16)

I'm not sure I understand the question completely, but you are right that converting a float32 tensor to float16 will lose precision.
The to() operation is the standard way to convert a dtype. Alternatively, you could use a = b.half(), which has the same effect.
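A minimal sketch of both calls and of the rounding error they introduce (the shape is taken from the question; the variable names are illustrative):

import torch

# Stand-in for the tensor from the question.
lrs_prev = torch.randn(1, 3, 576, 960, dtype=torch.float32)

# Both calls produce the same new float16 tensor.
lrs_prev_fp16 = lrs_prev.to(torch.float16)
assert torch.equal(lrs_prev_fp16, lrs_prev.half())

# Round-trip back to float32 to see how much was lost to rounding.
max_abs_error = (lrs_prev - lrs_prev_fp16.float()).abs().max()
print(f"max absolute rounding error: {max_abs_error.item():.3e}")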


Regarding accuracy: if your network architecture is a backbone plus a classification head, a better option may be to apply fp16 only to the backbone during training, as in the sketch below.
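A minimal sketch of that idea, assuming a simple backbone + head model (the module names and layer sizes are made up for illustration, and the half cast is only applied on a GPU, where fp16 convolutions are well supported):

import torch
import torch.nn as nn

class TinyModel(nn.Module):
    # Illustrative backbone + classification head; not the poster's model.
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(8, 10)  # stays in float32

    def forward(self, x):
        # Match the input dtype to the backbone weights (fp16 or fp32).
        feats = self.backbone(x.to(self.backbone[0].weight.dtype))
        return self.classifier(feats.flatten(1).float())

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyModel().to(device)
if device == "cuda":        # cast only the backbone to float16
    model.backbone.half()

x = torch.randn(1, 3, 576, 960, device=device)
logits = model(x)           # the classification head still runs in float32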

PS: fp16 can sometimes lead to NaN losses if there are BatchNorm layers in your network; see the sketch below for one way to avoid this.
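One common way around that (a sketch, assuming a CUDA GPU) is to skip the manual .half() call and use automatic mixed precision instead: autocast runs convolutions in float16 while the BatchNorm parameters and running statistics stay in float32, and GradScaler reduces fp16 gradient underflow.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny illustrative network containing a BatchNorm layer.
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
).cuda()

optimizer = torch.optim.SGD(net.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(1, 3, 576, 960, device="cuda")
target = torch.randint(0, 10, (1,), device="cuda")

with torch.cuda.amp.autocast():
    loss = F.cross_entropy(net(x), target)

scaler.scale(loss).backward()   # scale the loss to avoid fp16 gradient underflow
scaler.step(optimizer)
scaler.update()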