Normalisation necessary for image reconstruction?

I have a dataset of images that need to be denoised. When I looked at the images coming out of my dataloader, which are fed into the network, I saw that they look very different from the images in my dataset. The only transformations I used are Resize and ToTensor().
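For reference, the pipeline looks roughly like this (the image size is illustrative):

from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize((256, 256)),   # illustrative size
    transforms.ToTensor(),           # PIL image (0-255) -> float tensor in [0, 1]
])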

Is it because of ToTensor()? Is it normalizing my images, and is that causing my model to learn nothing?

Can I stop this?

Hi,

It looks like you have already figured that part out in How to input image in a model without normalizing it?.

But AFAIK, normalizing data, especially in image processing, almost always improves results. If you look at pretrained models for different tasks such as segmentation, classification, etc., the input images have always been normalized, so I think the problem lies somewhere else.

Normalization usually improves results; if your model cannot learn even with normalized images, switching to unnormalized images won't help much.
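For example, models pretrained on ImageNet are typically fed images normalized with the standard ImageNet channel statistics (a sketch; the mean/std values below are the usual ImageNet ones):

from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    # standard ImageNet channel statistics
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])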

Best

The thing is, when I look at the image going into the model (the normalized one), as humans we can't make any sense of it. But in the end we need an image like the ones in our dataset (unnormalized).

So is there anything I can do at test time to convert the model's output back, i.e. to reverse this normalization effect?

This may help:

# Min-max rescale the tensor to [0, 1] for visualization
lo = float(img.min())
hi = float(img.max())
img.clamp_(min=lo, max=hi)  # no-op here, but useful if you pass a custom range
img.add_(-lo).div_(hi - lo + 1e-5)
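For example, applied to a model output at test time (a sketch; model, noisy, and the file name are illustrative):

import torch
from torchvision import transforms

with torch.no_grad():
    out = model(noisy)               # hypothetical denoising model and input batch
img = out[0].cpu()                   # take the first image, shape (C, H, W)
lo, hi = float(img.min()), float(img.max())
img = (img - lo) / (hi - lo + 1e-5)  # min-max rescale to [0, 1]
transforms.ToPILImage()(img).save("denoised.png")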

An even simpler way would be to use torchvision.utils.save_image, which can do this rescaling for you.
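For instance (a sketch; the tensor and file name are illustrative):

from torchvision.utils import save_image

# normalize=True min-max rescales the tensor to [0, 1] before writing the file
save_image(output, "denoised.png", normalize=True)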

Thank you so so much!! This worked like a charm!!

You are welcome.
Are we going to close this question too, or is it a different issue?

Yeah I’ll close it too!