How to use fixed point numbers all the way


I am a new user of PyTorch. I have trained a network that I am satisfied with, and I would now like to implement the same network, with the same parameters, on some resource-limited hardware. I would like to use fixed-point numbers throughout.

I think by default PyTorch uses 32-bit floating-point numbers in a CUDA environment. So the best/safest approach would be to first convert every number on the GPU to a 32-bit fixed-point representation, then run the model again to verify it. I know there is a way to force every 32-bit float down to a 16-bit float using `half()`. Is there a way to force every floating-point number to a fixed-point number?
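For context on what such a conversion involves: PyTorch has no fixed-point dtype, so the usual workaround is "fake quantization" — snap each value to the grid of a signed Qm.n format while keeping float storage. A minimal sketch in plain Python (the function name and Q16.16 choice are illustrative, not from any library):

```python
def to_fixed(x, frac_bits=16, total_bits=32):
    """Round a float to the nearest value representable in signed
    fixed point with `frac_bits` fractional bits ("fake quantize")."""
    scale = 1 << frac_bits                    # grid step is 2**-frac_bits
    lo = -(1 << (total_bits - 1))             # most negative raw integer
    hi = (1 << (total_bits - 1)) - 1          # most positive raw integer
    raw = max(lo, min(hi, round(x * scale)))  # quantize, then saturate
    return raw / scale                        # back to a float on the grid

print(to_fixed(0.1))   # 0.1 is not exactly representable in Q16.16
print(to_fixed(0.5))   # 0.5 is on the grid, so it survives unchanged
```

Applying such a function elementwise to every weight and activation lets you estimate the accuracy loss of a given Qm.n format before committing to hardware.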

Thanks in advance!

I am trying to use integers for all the variables in the network, just to check whether it works.
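One way to sanity-check an integer-only path is to quantize weights and inputs to raw integers, accumulate the dot product in integer arithmetic, and rescale once at the end. A toy sketch, assuming a shared Q.8 scale for both operands (all names and the scale choice are illustrative):

```python
FRAC = 8  # Q.8: values are stored as raw integers times 2**-8

def quantize(vec, frac=FRAC):
    # float list -> raw signed integers on a 2**-frac grid
    return [round(x * (1 << frac)) for x in vec]

def int_dot(qa, qb, frac=FRAC):
    # integer multiply-accumulate; each product carries 2*frac
    # fractional bits, so rescale exactly once at the end
    acc = sum(a * b for a, b in zip(qa, qb))
    return acc / (1 << (2 * frac))  # back to float for comparison

w = [0.5, -0.25, 1.0]
x = [1.0, 2.0, -0.5]
print(int_dot(quantize(w), quantize(x)))  # → -0.5, matching the float dot product
```

Here every value happens to sit exactly on the Q.8 grid, so the integer result matches the float dot product; with arbitrary trained weights you would instead see a small quantization error.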


This may be helpful to you :
Paper for the above :
It works seamlessly with PyTorch, and you can “simulate” fixed-point behavior with it.

Regards, Sumit