Hello, while playing around with torchvision transforms (ToTensor & Normalize), I noticed that for some relatively small values, the output that is normalized and then denormalized comes back with a '-1' difference.

I confirmed that this only happens when I add the Normalize transform (there is no error with just ToTensor). Below is example code to reproduce it. For my application, a '-1' difference causes a big problem…

Would there be a way to improve this situation, please?

```
import numpy as np
from PIL import Image
from torchvision import transforms

# Define 'X' value as numpy
print("Define X")
X_origin = 16
X = (np.ones((1, 1, 3)) * X_origin).astype(np.uint8)
print(format(X[0][0][0], '08b'), '\n')
X = Image.fromarray(X)

# To Tensor
print("To Tensor")
t = transforms.Compose([transforms.ToTensor()])
X = t(X)
print(type(X))
print(X.shape)
print(X, '\n')

# Normalize
print("Normalize")
t = transforms.Compose([transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
X = t(X)
print(X, '\n')

# DeTensorfy & Denormalize
print("DeTensorfy -> Denormalize")
X = X.cpu().float().numpy()
X = (np.transpose(X, (1, 2, 0)) + 1) / 2.0 * 255.0
X = X.astype(np.uint8)
print(X, '\n')
print(format(X[0][0][0], '08b'), '\n')
assert X[0][0][0] == X_origin
```

After the round trip, X holds 15 instead of 16.
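For what it's worth, the off-by-one can be reproduced with plain float32 arithmetic, no torch needed: ToTensor divides the uint8 value by 255, Normalize with mean=std=0.5 computes (x - 0.5) / 0.5, and the denormalization maps back with (x + 1) / 2 * 255. Because of float32 rounding the result lands just below 16, and `astype(np.uint8)` truncates toward zero. This is a sketch of the arithmetic (my own assumption about where the error comes from), not part of the original code:

```python
import numpy as np

# Reproduce the transform chain with plain float32 scalars (no torch needed).
x = np.float32(16) / np.float32(255)             # ToTensor: uint8 -> [0, 1]
x = (x - np.float32(0.5)) / np.float32(0.5)      # Normalize(mean=0.5, std=0.5) -> [-1, 1]
x = (x + np.float32(1)) / np.float32(2) * np.float32(255)  # denormalize back to [0, 255]

print(x)                       # just below 16 due to float32 rounding
print(x.astype(np.uint8))      # 15 -- astype truncates toward zero
print(np.uint8(np.round(x)))   # 16 -- rounding before the cast recovers the value
```

So rounding before the uint8 cast (e.g. `np.round(X).astype(np.uint8)` in the denormalization step above) would avoid the '-1' difference, assuming the only error source is this float32 rounding.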