Define neural network weights as torch.float16 dtype

I wonder if it is possible to define part of the weights in a network as the torch.float16 data type. How can gradients be backpropagated smoothly through this kind of model?
Thank you

Yes, it’s possible to run just parts of your model in FP16. Autograd will take care of casting the gradients to the appropriate dtype at each transition.
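For illustration, here is a minimal sketch of a model whose middle layer is stored in FP16 while the rest stays in FP32; the layer sizes and class name are made up. The activations are cast to match each layer's dtype, and autograd produces gradients in the matching dtypes:

```python
import torch
import torch.nn as nn

class PartiallyHalfModel(nn.Module):
    """Toy model with its middle layer stored in FP16."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 20)         # FP32 weights
        self.fc2 = nn.Linear(20, 20).half()  # FP16 weights
        self.fc3 = nn.Linear(20, 2)          # FP32 weights

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        # Inputs must match the layer's dtype, so cast to FP16 here...
        x = torch.relu(self.fc2(x.half()))
        # ...and back to FP32 for the remaining layers.
        return self.fc3(x.float())

model = PartiallyHalfModel().cuda()  # FP16 ops are best supported on GPU
x = torch.randn(4, 10, device="cuda")
model(x).sum().backward()            # autograd handles the dtype transitions
print(model.fc2.weight.grad.dtype)   # torch.float16
print(model.fc1.weight.grad.dtype)   # torch.float32
```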
Also, if you are using cuDNN, some layers accept mixed-precision inputs, so you don’t have to explicitly cast them to FP16 or FP32.
Have a look at NVIDIA’s apex for mixed precision utilities, as well as this post for more information on apex.
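As a rough sketch of how apex is typically used: `amp.initialize` wraps the model and optimizer so casts happen automatically, and `amp.scale_loss` applies loss scaling to avoid FP16 gradient underflow. The `Net`, `criterion`, and `loader` names below are placeholders for your own model, loss, and data loader:

```python
import torch
from apex import amp

model = Net().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# opt_level "O1" patches ops to run in the fastest safe dtype automatically
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

for data, target in loader:
    optimizer.zero_grad()
    loss = criterion(model(data), target)
    # scale the loss so small FP16 gradients don't underflow to zero
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()
```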