A quick show-and-tell will be the easiest way to explain the problem.
import torch
import torchvision.datasets as dset
import torchvision.transforms as transforms
from torchvision.utils import save_image
# Define the dataset and the loader as usual
dataset = dset.ImageFolder(root=datapath,
                           transform=transforms.Compose([
                               transforms.ToTensor(),
                               transforms.Normalize((0.5414, 0.5333, 0.5338),
                                                    (0.1897, 0.1911, 0.1932))]))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=64)
# Load a batch; ImageFolder yields (image, label) pairs
data, _ = next(iter(dataloader))
dlX = data.to(device)
# Save the batch as an image grid (path is illustrative)
save_image(dlX, 'result.png')
Result of save_image
Am I missing something, or doing something wrong?
I have trained a DCGAN on this dataset with results that look like this (but I can only post a single image).
I am attaching a link to my Dropbox where you can see the result, the original, and the training results.
Take a look at torchvision.transforms.Normalize. Your picture is being normalized, which results in this weird picture. If you remove
transforms.Normalize((0.5414, 0.5333, 0.5338), (0.1897, 0.1911, 0.1932)),
the image should look normal.
Thank you for the reply.
As far as I understand normalization, I thought it was crucial to normalize the dataset according to its mean and std, which is why those values were calculated.
Then, to get results that more closely resemble the original, is it okay to skip normalization?
Yes, normalization should be done. I wouldn’t skip it, as normalizing data improves performance.
If you want the original image back and you know the mean and std, you could simply reverse the normalization:
normalized_image = (original_image - mean) / std
original_image = (normalized_image * std) + mean
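In PyTorch this reversal can be done channel-wise with broadcasting. A minimal sketch using the mean/std values from this thread, assuming batches of shape (N, 3, H, W) (the random batch is just a stand-in for real data):

```python
import torch

# Channel statistics from the thread, shaped to broadcast over (N, 3, H, W)
mean = torch.tensor([0.5414, 0.5333, 0.5338]).view(1, 3, 1, 1)
std = torch.tensor([0.1897, 0.1911, 0.1932]).view(1, 3, 1, 1)

def denormalize(batch):
    # Undo (x - mean) / std channel-wise
    return batch * std + mean

# Round-trip check on a random batch in [0, 1)
original = torch.rand(4, 3, 8, 8)
normalized = (original - mean) / std
restored = denormalize(normalized)
print(torch.allclose(restored, original, atol=1e-6))  # True
```

The denormalized tensor can then be passed straight to save_image to get pictures that look like the originals.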
Thank you for the reply; I had not been able to come back to this issue for a while.
I will check within the next week to see if it works, which I strongly expect it will.
Thanks for your time