Sorry if this is an obvious question. I know converting from float32 to float16 loses precision and accuracy, but is there a better way than using the tensor.to() function?
I have a 1x3x576x960 tensor that I want to convert from float32 to float16
rs_prev_fp16 = lrs_prev.to(torch.float16)
I’m not sure I understand the question completely, but you are right that converting a float32 tensor to float16 will lose precision. The to() operation is the standard way of converting a dtype. Alternatively, you could also use a = b.half(), which has the same effect.
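For reference, here is a minimal sketch comparing the two calls on a tensor with the shape from your post (the random data is only a placeholder):

```python
import torch

# Placeholder tensor with the shape from the question.
lrs_prev = torch.randn(1, 3, 576, 960, dtype=torch.float32)

# The two conversions below are equivalent; both return a new float16 tensor.
rs_prev_fp16 = lrs_prev.to(torch.float16)
rs_prev_half = lrs_prev.half()

print(rs_prev_fp16.dtype)                       # torch.float16
print(torch.equal(rs_prev_fp16, rs_prev_half))  # True
```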
Regarding accuracy: if your network architecture is backbone + classification head, a better option may be to apply fp16 only to the backbone while training, as in the sketch below.
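One way to do that is automatic mixed precision, running only the backbone's forward pass under autocast so its ops use fp16 where safe while the classification head stays in float32. This is just a sketch with placeholder modules, not your actual architecture:

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for an arbitrary backbone + classification head.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
)
classifier = nn.Linear(16, 10)

x = torch.randn(1, 3, 576, 960)

if torch.cuda.is_available():
    backbone, classifier, x = backbone.cuda(), classifier.cuda(), x.cuda()
    # Only the backbone runs under autocast, so its ops execute in fp16 where safe.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        feats = backbone(x)
    # The classification head stays in full float32.
    logits = classifier(feats.float().flatten(1))
else:
    # Plain float32 fallback on CPU.
    logits = classifier(backbone(x).flatten(1))

print(logits.dtype)  # torch.float32
```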
PS: fp16 can sometimes lead to NaN losses if there are BatchNorm (BN) layers in your network.
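If you do cast the whole model with .half() and run into NaNs, one workaround I have seen is to keep the BatchNorm layers in float32, since their running statistics are fragile in half precision. A sketch of that idea (convert_to_half_keep_bn_fp32 is just an illustrative helper, not a PyTorch API, and it assumes a CUDA/cuDNN backend that accepts fp16 inputs with fp32 BN parameters):

```python
import torch.nn as nn

def convert_to_half_keep_bn_fp32(model: nn.Module) -> nn.Module:
    # Cast all parameters and buffers to fp16 ...
    model.half()
    # ... then cast BatchNorm layers back to fp32, since their running
    # statistics tend to overflow/underflow in half precision.
    # Assumption: cuDNN supports fp16 inputs with fp32 BN weights/stats.
    for module in model.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            module.float()
    return model
```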