By default, PyTorch initializes all tensors and parameters in “single precision”, i.e. float32.
If you are not using the mixed-precision training utilities and are not calling .half(), .to(torch.float16), etc. in your code, the model should stay in FP32.
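
As a quick illustration (a minimal sketch, using a single nn.Linear layer in place of a full model), calling .half() converts the parameters in place:

import torch

layer = torch.nn.Linear(4, 4)
print(layer.weight.dtype)  # torch.float32 by default

layer.half()  # converts parameters (and buffers) to float16
print(layer.weight.dtype)  # torch.float16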
To check this, you could iterate over all parameters (and buffers) and print their dtype:
for param in model.parameters():
    print(param.dtype)
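
Note that model.parameters() does not include buffers (e.g. the running_mean and running_var of BatchNorm layers), so those need a separate loop. A fuller sketch, assuming a torchvision ResNet as a stand-in for your own model:

import torch
import torchvision

model = torchvision.models.resnet18()  # hypothetical example model

# Parameters (weights and biases) should all report torch.float32.
for name, param in model.named_parameters():
    print(name, param.dtype)

# Buffers (e.g. BatchNorm running stats) are tracked separately
# and are not returned by model.parameters().
for name, buf in model.named_buffers():
    print(name, buf.dtype)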