So I'm confused here.
I am following the tutorial on the PyTorch website.
I am reading images from CIFAR-10 and, as an initial step, I am doing some preprocessing on them.
Here I just want to normalize each channel (with a batch size of 1) to the range 0-1, but it does not look like it is working…
I am not sure whether this is a bug or whether I am making a mistake here…
So here is my code:
import torch
import torchvision
import torchvision.transforms as transforms

BatchSize = 1

# ToTensor() gives a FloatTensor of shape (C, H, W); the Normalize call
# below is my attempt to get each channel into the range [0, 1]
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
     ])

TrainSet = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(TrainSet, batch_size=BatchSize)

dataiter = iter(trainloader)
images, labels = next(dataiter)
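To be explicit about what I expected: I assumed the Normalize((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)) call would rescale every channel on its own, roughly like the two lines below. That is just my reading of it, not something I found in the docs:

ch = images[0, 0, :, :]                             # one channel of the first image
expected = (ch - ch.min()) / (ch.max() - ch.min())  # what I thought would happen to each channel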
And here I just wanted to check the values:
print('Images size is: {} and it has min and max of {} and {}\n\n'
      '1st channel is {}\n'
      'min and max of the 1st channel is {} and {}\n\n'
      '2nd channel is {}\n'
      'min and max of the 2nd channel is {} and {}\n\n'
      '3rd channel is {}\n'
      'min and max of the 3rd channel is {} and {}\n'
      .format(images.size(), torch.min(images[0, :, :, :]), torch.max(images[0, :, :, :]),
              images[0, 0, :, :], torch.min(images[0, 0, :, :]), torch.max(images[0, 0, :, :]),
              images[0, 1, :, :], torch.min(images[0, 1, :, :]), torch.max(images[0, 1, :, :]),
              images[0, 2, :, :], torch.min(images[0, 2, :, :]), torch.max(images[0, 2, :, :])))
And the results were:
Images size is: torch.Size([1, 3, 32, 32]) and it has min and max of 0.0 and 1.0,
1st channel is
0.1647 0.2078 0.2588 … 0.4353 0.4431 0.4353
0.1569 0.2118 0.2078 … 0.4431 0.4392 0.4353
0.1686 0.1922 0.1961 … 0.4275 0.4392 0.4353
… ⋱ …
0.2824 0.2706 0.2745 … 0.3490 0.3647 0.3765
0.2667 0.2667 0.2706 … 0.3961 0.4000 0.4078
0.2588 0.2588 0.2627 … 0.4118 0.4078 0.4000
[torch.FloatTensor of size 32x32]

min and max of the 1st channel is 0.0549019612372 and 1.0

2nd channel is
0.1843 0.2314 0.2863 … 0.5216 0.5333 0.5255
0.1804 0.2392 0.2353 … 0.5333 0.5294 0.5216
0.1922 0.2157 0.2235 … 0.5255 0.5294 0.5216
… ⋱ …
0.2902 0.2784 0.2824 … 0.3725 0.3882 0.4000
0.2784 0.2784 0.2824 … 0.4196 0.4235 0.4314
0.2706 0.2706 0.2745 … 0.4353 0.4314 0.4235
[torch.FloatTensor of size 32x32]

min and max of the 2nd channel is 0.0470588244498 and 1.0

3rd channel is
0.1451 0.1765 0.2235 … 0.7922 0.8039 0.7922
0.1412 0.1843 0.1686 … 0.7765 0.7961 0.8000
0.1529 0.1608 0.1569 … 0.7333 0.7961 0.8078
… ⋱ …
0.3765 0.3686 0.3686 … 0.4745 0.4902 0.5020
0.3529 0.3529 0.3569 … 0.5216 0.5255 0.5333
0.3373 0.3333 0.3412 … 0.5373 0.5333 0.5255
[torch.FloatTensor of size 32x32]

min and max of the 3rd channel is 0.0 and 0.815686285496
As you can see, the three channels taken together have values between 0 and 1, but each channel on its own does not look normalized: its min and max are not 0 and 1.
I thought that defining the transform as above would normalize each channel to 0-1 separately. Can you please tell me how to do it if I want each channel to be normalized on its own? I sketched below what I have in mind.
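This is roughly what I am after; PerChannelMinMax is just a name I made up, and I do not know whether torchvision already has a built-in transform that does this:

class PerChannelMinMax(object):
    # Rescale each channel of a (C, H, W) tensor to [0, 1] independently.
    def __call__(self, img):
        flat = img.view(img.size(0), -1)            # (C, H*W)
        mins = flat.min(dim=1)[0].view(-1, 1, 1)    # per-channel minimum
        maxs = flat.max(dim=1)[0].view(-1, 1, 1)    # per-channel maximum
        return (img - mins) / (maxs - mins + 1e-8)  # stretch every channel to [0, 1]

transform = transforms.Compose(
    [transforms.ToTensor(),    # FloatTensor in [0, 1], shape (C, H, W)
     PerChannelMinMax()
     ])

Is something along these lines reasonable, or is there a cleaner way?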
Thanks