@ptrblck
I ran into one more weird issue:
```python
# load a binary 64x64 image whose pixel values are 0 and 255
self.data_transforms = transforms.Compose([
    transforms.Resize((28, 28)),
    transforms.Normalize([0.485], [0.229]),
])
image = torch.unsqueeze(torch.from_numpy(image).type(torch.float), 0)
image = self.data_transforms(image)
```
After the above code runs, the normalized values in `image` are very large, e.g.:
```
[[[ 458.0684, 713.7275, 713.7274, 713.7275, 713.7275, 1111.4192, 969.3862, -2.1179, -2.1179, -2.1179, -2.1179, -2.1179, 742.1346, 1111.4192, 1111.4192, 1111.4192, 1111.4192, 1111.4192, 1111.4192, 367.1666, -2.1179, -2.1179, -2.1179, -2.1179, 554.6549, 1111.4192, 1111.4192, 1111.4192], ......
```
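I think I can see where these numbers come from (just my own arithmetic, using the fact that Normalize computes (x - mean) / std per channel and assuming the pixels are still in [0, 255] rather than [0, 1]):
```python
(255 - 0.485) / 0.229  # ~ 1111.4192 -> matches the large values above
(0 - 0.485) / 0.229    # ~ -2.1179   -> matches the negative values above
```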
With the data normalized by the above code, the training accuracy stays stuck at 10% throughout training.
If I remove transforms.Normalize, the training accuracy reaches 98.50%.
These two experiments show that the way I am using transforms.Normalize is not correct.
I would like to get the normalization working, since it might improve the accuracy further.
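Here is a minimal sketch of what I plan to try, assuming Normalize expects float inputs already scaled to [0, 1] (the division by 255.0 is my addition, and the dummy image is only there to make the example self-contained):
```python
import numpy as np
import torch
from torchvision import transforms

data_transforms = transforms.Compose([
    transforms.Resize((28, 28)),
    transforms.Normalize([0.485], [0.229]),
])

# dummy binary 64x64 image with values 0 and 255, standing in for the real input
image = np.random.randint(0, 2, (64, 64), dtype=np.uint8) * 255

image = torch.unsqueeze(torch.from_numpy(image).type(torch.float), 0)  # (1, 64, 64)
image = image / 255.0             # scale to [0, 1] before normalizing
image = data_transforms(image)    # values now roughly in [-2.12, 2.25]
```
If that still doesn't help, maybe the ImageNet statistics (0.485 / 0.229) are not a good fit for a binary dataset, and I should compute the mean/std from my own training data instead?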