Weight parameters with 8- and 14-bit precision?

Hi, I need to train a small classifier on MNIST using weights of different precisions during training (specifically, 8-bit and 14-bit). Is this easy to do in PyTorch? That is, can I take the code of an existing classifier online (from one of the many tutorials) and change a couple of lines to achieve this?

https://pytorch.org/tutorials/recipes/quantization.html

This is a nice gist as well: Quantisation example in PyTorch · GitHub

Thanks! However, this does not allow training the model at the precisions I need; it is for inference only. Are you aware of any other way of doing this?

Is quantisation-aware training different from what you want to achieve? Or do you want quantisation in the backward pass too? I imagine that is not an easy feat.
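For arbitrary bit widths like 14 bits (which the built-in quantisation backends don't target), one common approach is "fake quantisation" with a straight-through estimator: round the weights to the desired number of levels in the forward pass, but let gradients flow through unchanged in the backward pass. Here is a minimal sketch of that idea; the class names (`FakeQuant`, `QuantLinear`) are just illustrative, not part of any PyTorch API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FakeQuant(torch.autograd.Function):
    """Uniform symmetric fake quantisation with a straight-through estimator:
    the forward pass rounds weights onto a 2**bits-level grid, the backward
    pass passes gradients through unchanged."""

    @staticmethod
    def forward(ctx, w, bits):
        qmax = 2 ** (bits - 1) - 1
        # Per-tensor scale; clamp avoids division by zero for all-zero weights.
        scale = w.abs().max().clamp(min=1e-8) / qmax
        return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: identity gradient w.r.t. w, no gradient for bits.
        return grad_output, None


class QuantLinear(nn.Linear):
    """nn.Linear whose weights are fake-quantised to `bits` bits on every
    forward pass, so training sees the quantised precision end to end."""

    def __init__(self, in_features, out_features, bits=8):
        super().__init__(in_features, out_features)
        self.bits = bits

    def forward(self, x):
        w_q = FakeQuant.apply(self.weight, self.bits)
        return F.linear(x, w_q, self.bias)
```

With this you can take any tutorial MNIST classifier and swap its `nn.Linear` layers for `QuantLinear(..., bits=8)` or `QuantLinear(..., bits=14)`; the master weights stay in float32, which is how quantisation-aware training usually keeps SGD stable.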