I restored the image using restored_image = (orig_image - B) / C.
Here C is the output of training, and its values fall in the range (-2, 10). I then apply the transformation to bring the values into (-1, 1), as mentioned. After the transformation, the C values are in the range tensor([[[[-1.0349e-02, 4.3450e-02, 3.3275e-02, …, 3.0451e-02, etc.
So when I divide (orig_image - B) by C, the output values blow up: tensor([[[[ 4.2118e+02, -1.7778e+01, -1.5082e+02, …, -2.7659e+02, -4.8812e+01, -1.6656e+01] etc.
Then I transformed the result into the range (-1, 1) again, and the final values are tiny: tensor([[[[ 2.9290e-04, -1.2363e-05, -1.0488e-04, …, -1.9235e-04, -3.3945e-05, -1.1583e-05] etc.
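To make the effect concrete, here is a minimal numpy sketch (the values are made up, not from my actual tensors) showing how a single near-zero entry in C produces one huge output, and how a subsequent min-max rescale to (-1, 1) then squashes every other pixel onto almost the same value:

```python
import numpy as np

# Illustrative values only: one near-zero entry in C is enough
# to wreck a subsequent min-max rescale.
diff = np.array([0.2, -0.5, 0.8, 0.1, -0.3])   # stands in for (orig_image - B)
C    = np.array([0.4, -0.6, 1e-3, 0.5, -0.2])  # one entry close to zero

restored = diff / C
# restored -> [0.5, 0.8333..., 800.0, 0.2, 1.5]; the 1e-3 entry explodes.

# Min-max rescale to (-1, 1): the 800.0 outlier sets the range, so the
# other four pixels all land within ~0.004 of -1 -> a flat gray image.
lo, hi = restored.min(), restored.max()
rescaled = 2 * (restored - lo) / (hi - lo) - 1
print(rescaled)
```

This matches what I see: the rescaled values are all tiny and nearly identical, which renders as a constant gray level.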
As a result, the restored image is essentially constant, a flat gray level.
The division by near-zero fractional C values seems to be causing the issue. Can you throw some light on how to solve it?
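One mitigation I have been considering (a sketch under my own assumptions, not from any existing code; eps is a hypothetical floor I would have to tune) is to clamp the magnitude of C away from zero before dividing, so that no single pixel can dominate the range:

```python
import numpy as np

# Sketch of a possible workaround: floor |C| at eps while keeping its sign.
# eps is a made-up threshold; it would need tuning for real data.
eps = 1e-2

diff = np.array([0.2, -0.5, 0.8, 0.1, -0.3])   # stands in for (orig_image - B)
C    = np.array([0.4, -0.6, 1e-3, 0.5, -0.2])  # no exact zeros here; if C could
                                               # be exactly 0, replace sign(0)
                                               # with +1 first
C_safe = np.sign(C) * np.maximum(np.abs(C), eps)
restored = diff / C_safe
print(restored)  # the 1e-3 entry now divides by eps instead: bounded blow-up
```

In PyTorch the same idea would presumably use torch.sign and torch.clamp(C.abs(), min=eps), but I am not sure whether clamping is the right fix here or whether the normalization step itself should change.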