Interesting.
I’m not sure what I’m doing wrong:
I have a data loader, which does the following transformation:
def imageNetTransformPIL(size=224):
    return transforms.Compose([
        transforms.Resize(size),
        transforms.CenterCrop(size),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
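(For reference, Normalize maps each channel to (x - mean) / std with the ImageNet statistics above, so pixel values leave the [0, 1] range. A quick numpy sketch of that arithmetic, as an illustration only:)

```python
import numpy as np

# Channel-wise (x - mean) / std with the ImageNet statistics used above.
mean = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
std = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)

x = np.zeros((3, 224, 224))  # a black image, values in [0, 1]
y = (x - mean) / std

# A black pixel ends up well below zero on every channel.
print(y[:, 0, 0])
```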
I get a torch tensor of images (called k; k.shape = (batch_size, 3, 224, 224)) from my data loader and display it using the following code:
plt.imshow(k[0].permute(1, 2, 0))  # CHW -> HWC for imshow
fig = plt.gcf()
fig.set_size_inches(14, 10)
plt.show()
This displays the image as expected.
I then use your code and display the images at various points:
img = transforms.ToPILImage()(k[0])
color_jitter = transforms.ColorJitter(brightness=0, contrast=0, saturation=0, hue=0)
transform = transforms.ColorJitter.get_params(
    color_jitter.brightness, color_jitter.contrast, color_jitter.saturation,
    color_jitter.hue)
img = transform(img)
plt.imshow(img)
fig = plt.gcf()
fig.set_size_inches(14, 10)
plt.show()
transform_tensor = transforms.ToTensor()
img = transform_tensor(img)
transform_normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
img = transform_normalize(img)
plt.imshow(img.permute(1, 2, 0))
fig = plt.gcf()
fig.set_size_inches(14, 10)
plt.show()
Surprisingly, even the first image displayed is different (same structure but wildly different colors), suggesting that it is the method by which I am displaying these images that differs. This led me to try converting the output back into a torch tensor and displaying the image again, but this also gives a very different image (same structure, very different colors).
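(One thing worth checking, as an assumption rather than a diagnosis: matplotlib's imshow clips float RGB data to [0, 1], so a normalized tensor, whose values can be roughly -2.1 to 2.6, displays with the structure intact but the colors distorted. A minimal numpy sketch of that clipping:)

```python
import numpy as np

# imshow effectively clips float RGB values to [0, 1]; normalized
# pixel values outside that range get flattened to 0 or 1, which
# keeps edges/structure but shifts the colors.
normalized = np.array([-2.0, -0.5, 0.3, 1.2, 2.5])
clipped = np.clip(normalized, 0.0, 1.0)
print(clipped)  # -> [0.  0.  0.3 1.  1. ]
```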
Do you have any ideas what I might be doing wrong? Thanks a bunch for your help thus far.