I want to ask again how to normalize a batch of images.
After loading the CIFAR-10 dataset, I applied a custom transformation to the images,
and I want to normalize them again before passing them to the network.
I followed this code (Image normalization in PyTorch - Deep Learning - Deep Learning Course Forums)
and was able to get the mean and std of each channel.
Now I want to normalize the [128, 3, 32, 32] transformed batch again and pass it to the network,
but I don't know how to apply the normalization.
(Do I have to use a for-loop like this?)
for batch_idx in range(128):
    for channel_idx in range(3):
        images[batch_idx, channel_idx] = (images[batch_idx, channel_idx] - mean[channel_idx]) / std[channel_idx]
I wanted to use transforms.Normalize(mean, std), but
I don't know what shape mean and std need to be.
Right now mean and std each have shape [3], one value per channel,
and if I call

transforms.Normalize(mean, std)(data)

then I get this error message:

ValueError: Expected tensor to be a tensor image of size (C, H, W). Got tensor.size() = torch.Size([128, 3, 32, 32]).
In what shape should I pass the tensor to the Normalize function?
Or do I have to use some other iterative method?
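For context, here is a broadcasting-based sketch of the kind of thing I am hoping exists, instead of the loop above. It assumes mean and std are 1-D tensors of length 3 (the example mean/std values are just placeholders, not my actual computed statistics):

```python
import torch

# placeholder per-channel statistics; in my case these come from the
# transformed dataset, not from these hard-coded numbers
mean = torch.tensor([0.4914, 0.4822, 0.4465])
std = torch.tensor([0.2470, 0.2435, 0.2616])

# a batch of transformed images, same shape as mine
images = torch.rand(128, 3, 32, 32)

# reshape the per-channel stats to [1, 3, 1, 1] so they broadcast
# across the batch, height, and width dimensions
normalized = (images - mean.view(1, 3, 1, 1)) / std.view(1, 3, 1, 1)

print(normalized.shape)  # torch.Size([128, 3, 32, 32])
```

Is something like this the intended way, or does Normalize have to be applied image by image?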
Thank you for reading my question.