# Is there a way to improve this precision issue?

Hello, while playing around with torchvision transforms (`ToTensor` & `Normalize`),
I realized that for some relatively small numbers the output, after being normalized and then denormalized, is off by 1.
I confirmed that this happens only when I add the `Normalize` transform (no error with just `ToTensor`). Below is example code to reproduce. For my application, an off-by-one difference causes real problems…
Would there be a way to improve this situation, please?

```python
import numpy as np
from PIL import Image
from torchvision import transforms

# Define 'X' value as a numpy array
print("Define X")
X_origin = 16
X = X_origin
X = (np.ones((1, 1, 3)) * X).astype(np.uint8)
print(format(X_origin, '08b'), '\n')
X = Image.fromarray(X)

# To Tensor
print("To Tensor")
transform_list = [transforms.ToTensor()]
t = transforms.Compose(transform_list)
X = t(X)
print(type(X))
print(X.shape)
print(X, '\n')

# Normalize
print("Normalize")
transform_list = [transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))]
t = transforms.Compose(transform_list)
X = t(X)
print(X, '\n')

# DeTensorfy & Denormalize
print("DeTensorfy -> Denormalize")
X = X.cpu().float().numpy()
X = (np.transpose(X, (1, 2, 0)) + 1) / 2.0 * 255.0
X = X.astype(np.uint8)
print(X, '\n')
print(format(int(X[0, 0, 0]), '08b'), '\n')

assert (X == X_origin).all()
``````

X holds 15 instead of 16.

OK, I was dumb. I'll keep the post and answer it, in case someone like me comes along…

The issue was with the following lines:

```python
X = (np.transpose(X, (1, 2, 0)) + 1) / 2.0 * 255.0  # 15.9999...
X = X.astype(np.uint8)  # truncates the 0.9999 part
```

I should have rounded X before converting to the uint8 dtype:

```python
X = (np.transpose(X, (1, 2, 0)) + 1) / 2.0 * 255.0  # 15.9999...
X = np.rint(X)          # round to the nearest integer first
X = X.astype(np.uint8)  # now the cast is exact
```
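To double-check, here's a minimal plain-NumPy sketch (assuming the same `Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))` math, done in float32 like a PyTorch tensor) that round-trips every possible uint8 value. The float error is on the order of 1e-4, far below 0.5, so `np.rint` recovers every value exactly, while a bare `astype(np.uint8)` cast truncates toward zero and can be off by one:

```python
import numpy as np

x = np.arange(256, dtype=np.uint8)

# Same pipeline as above, in float32:
t = x.astype(np.float32) / 255.0       # ToTensor-style scaling to [0, 1]
n = (t - 0.5) / 0.5                    # Normalize(mean=0.5, std=0.5) -> [-1, 1]
d = (n + 1) / 2.0 * 255.0              # denormalize back towards [0, 255]

trunc = d.astype(np.uint8)             # truncates toward zero
rounded = np.rint(d).astype(np.uint8)  # rounds to nearest integer first

print("values changed by truncation:", int((trunc != x).sum()))
print("values changed after rounding:", int((rounded != x).sum()))  # 0
```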