Floating-point handling in PyTorch

Hi Team,

I am looking for a way to handle floating-point arithmetic using integer (fixed-point) manipulations in PyTorch.
Below is an example of what I currently do manually for simple operations:
#define FP_MUL(X,Y) (((X)*(Y))>>8)  /* Q8.8 multiply: drop the 8 extra fractional bits */
#define FP_DIV(X,Y) (((X)<<8)/(Y))  /* Q8.8 divide: pre-shift the dividend to keep 8 fractional bits */

I need this kind of logic to run deep-learning functionality on an embedded device that has no floating-point unit. For example, how can an activation function be implemented in hardware without floating point?
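To illustrate what I mean by a fixed-point activation, here is a sketch in the same Q8.8 format as the macros above. `fp_relu` and `fp_hard_sigmoid` are hypothetical names, not an existing PyTorch API; the sigmoid is replaced by a common piecewise-linear ("hard sigmoid") approximation, clamp(0.25*x + 0.5, 0, 1), which needs only integer adds, divides, and compares:

```c
#include <stdint.h>

/* Q8.8 fixed point: real value = raw / 256.0 */
#define FP_ONE 256

/* ReLU needs no approximation: clamp negatives to zero. */
static int32_t fp_relu(int32_t x) {
    return x > 0 ? x : 0;
}

/* Piecewise-linear sigmoid: y = clamp(0.25*x + 0.5, 0, 1), in Q8.8.
   x/4 is used instead of x>>2 so negative inputs round toward zero
   portably. */
static int32_t fp_hard_sigmoid(int32_t x) {
    int32_t y = x / 4 + FP_ONE / 2;
    if (y < 0)      return 0;
    if (y > FP_ONE) return FP_ONE;
    return y;
}
```

Smooth activations like the true sigmoid or tanh are usually handled the same way on FPU-less hardware: either a piecewise-linear approximation as above or a small lookup table indexed by the integer input.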