I want to have a fully half-precision model, meaning the weights are in FP16 and the output tensors of every layer are in FP16.
I want to train this model to learn the best parameters in FP16 and then use them for inference.
I don’t know if this is possible or supported by PyTorch?
Yes, PyTorch supports pure FP16 operations and models on the GPU, with the known caveats of potential overflows etc. caused by the numerical format, which is why we generally recommend using our mixed-precision training utilities.
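For context, a minimal sketch of what "pure FP16" looks like in practice (the module, shapes, and tensor names here are made up for illustration, not taken from the thread):

```python
import torch
import torch.nn as nn

# Hypothetical model, cast entirely to FP16 on the GPU.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).cuda().half()

# FP16 input tensor; every layer then computes and returns FP16 activations.
x = torch.randn(32, 128, device="cuda", dtype=torch.float16)

out = model(x)
print(out.dtype)  # torch.float16
```

Note that training such a model directly in FP16 can run into overflow/underflow issues in the gradients and optimizer states, which is the motivation for the mixed-precision utilities mentioned above.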
Thank you for your response. I want to know if mixed precision can be used for both training and inference?
Yes, torch.amp can be used for both use cases.
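A minimal sketch of using torch.amp in both settings; the model, data, and hyperparameters below are placeholders assumed for illustration:

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10).cuda()                        # weights stay in FP32
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()                     # scales the loss to avoid gradient underflow
criterion = nn.CrossEntropyLoss()

x = torch.randn(32, 128, device="cuda")
y = torch.randint(0, 10, (32,), device="cuda")

# Training step: autocast runs eligible ops in FP16 while keeping FP32 master weights.
optimizer.zero_grad()
with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = criterion(model(x), y)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()

# Inference: autocast alone is enough; no GradScaler is needed without backward().
model.eval()
with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    preds = model(x)
```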