Hello fellows,
I do have an issue to properly display images with CIFAR10 and PIL.
My dataset is loaded using torchvision.datasets as follows (the mean and std values are the standard ImageNet per-channel statistics that are commonly used for normalization):
dataset = datasets.CIFAR10('CIFAR10', train=False,
                           download=True,
                           transform=transforms.Compose([
                               transforms.ToTensor(),
                               transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                    std=[0.229, 0.224, 0.225]),
                           ]))
Later in my code, I want to unnormalize some images so I can save them to disk and visualize them later.
I therefore followed the definition of transforms.Normalize
at https://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.Normalize to recover my original images:
def convert_cifar10(t, pil):
    """Convert a normalized CIFAR10 image tensor into a plottable image.

    :param t: image tensor of size (3, 32, 32)
    :type t: torch.Tensor
    :param pil: output is of size (3, 32, 32) if True, else (32, 32, 3)
    :type pil: bool
    """
    im = t.detach().cpu()
    # approximate unnormalization: x = x_norm * std + mean, per channel
    im[0] = im[0]*0.229 + 0.485
    im[1] = im[1]*0.224 + 0.456
    im[2] = im[2]*0.225 + 0.406
    if not pil:
        im = im.numpy()
        im = np.transpose(im, (1, 2, 0))
    return im
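For what it's worth, the three per-channel lines could also be written with broadcasting, which builds the result out of place instead of assigning into im channel by channel (unnormalize is just a name I'm using for this sketch, not something from my actual code):

```python
import torch

# ImageNet-style statistics, reshaped to broadcast over (3, H, W)
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def unnormalize(t):
    # t * std allocates a new tensor, so the argument is left untouched
    return t * std + mean

x = torch.rand(3, 32, 32)
x_norm = (x - mean) / std
recovered = unnormalize(x_norm)
print(torch.allclose(recovered, x, atol=1e-6))  # True: round trip recovers x
```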
The minimal example is the following:
dataiter = iter(dataset)
data, label = next(dataiter)
original_img = convert_cifar10(data, pil=False)
plt.imshow(original_img)
plt.show()
second = convert_cifar10(data, pil=False)
plt.imshow(second)
plt.show()
The first plot displays something normal (I cannot attach it as a new user, but the images are clearly the original dataset images).
The second one is much less colourful.
I may have applied the transformation twice to the input tensor, but the documentation explicitly states that the operation is performed out of place:
This transform acts out of place, i.e., it does not mutate the input tensor.
Is there anything I have misunderstood?
Thank you in advance