What is the simplest way to recover a PIL image after normalization?

A typical way to load image data is to:

  1. Load the image from disk as a PIL Image with uint8 data; as a tensor it has shape [C, H, W]
  2. Convert it to type float/double and map it to values between 0 and 1
  3. Normalize it according to some mean and std. Generally, this means the tensor then contains negative values.
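
In torchvision terms, that pipeline is roughly the following (the ImageNet mean/std and the file path are just placeholders):

import torch
from PIL import Image
from torchvision import transforms as torch_transforms

img = Image.open("some_image.jpg")    # placeholder path; PIL Image with uint8 data
x = torch_transforms.ToTensor()(img)  # float tensor, shape [C, H, W], values in [0, 1]
x = torch_transforms.Normalize([0.485, 0.456, 0.406],
                               [0.229, 0.224, 0.225])(x)  # typically produces negative values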

I’d like to visualize the normalized image, and I would like to use the ToPILImage transform for this.
However, AFAIK, this transform requires the input tensor to be of type uint8.

Is there an ‘easy’ way to do this, or do I have to rescale each channel individually using its min/max?

Thanks

Hi,

I assume you have a normalized image tensor called img. This snippet will unnormalize the image given your custom mean and std.

import numpy as np
from PIL import Image

img = img.numpy().transpose((1, 2, 0))     # [C, H, W] -> [H, W, C]; numpy images are [h, w, c]
mean = np.array([0.4451, 0.4262, 0.3959])  # mean of your dataset
std = np.array([0.2411, 0.2403, 0.2466])   # std of your dataset
img = std * img + mean                     # undo Normalize
img = np.clip(img, 0, 1)
pil_image = Image.fromarray((img * 255).astype(np.uint8))  # fromarray expects uint8 data

But to use ToPILImage, keep the image as a tensor: remove the first line and use torch operations instead of the numpy ones. Finally, instead of the last line, ToPILImage()(img) will do the trick.
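
For reference, a minimal sketch of that tensor-only variant (same mean/std as above; ToPILImage also accepts a float tensor with values in [0, 1]):

import torch
from torchvision import transforms

mean = torch.tensor([0.4451, 0.4262, 0.3959]).view(3, 1, 1)  # reshape for broadcasting over [C, H, W]
std = torch.tensor([0.2411, 0.2403, 0.2466]).view(3, 1, 1)

img = img * std + mean  # undo Normalize, still [C, H, W]
img = img.clamp(0, 1)   # keep values in the range ToPILImage expects
pil_image = transforms.ToPILImage()(img)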

Bests

Thank you for the response but that’s not quite what I’m looking for.

I want to keep the normalization. To do that I have to somehow get back to value ranges that PIL can handle.

I think that I could do it like so:

import torch
from PIL import Image
from torchvision import transforms as torch_transforms

# load image + normalize
img = Image.open(img_loc)
x_tensor = torch_transforms.ToTensor()(img)
x_tensor = x_tensor / 255
x_tensor = torch_transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])(x_tensor)

# save normalized image
a = x_tensor[0,:,:].min()
b = x_tensor[1,:,:].min()
c = x_tensor[2,:,:].min()
a2 = x_tensor[0,:,:].max()
b2 = x_tensor[1,:,:].max()
c2 = x_tensor[2,:,:].max()
x_tensor[0] = ((x_tensor[0]-a) / (a2 - a)) * 255
x_tensor[1] = ((x_tensor[1]-b) / (b2 - b)) * 255
x_tensor[2] = ((x_tensor[2]-c) / (c2 - c)) * 255
img2 = torch_transforms.ToPILImage()(x_tensor.type(torch.uint8))

Basically, I ‘lift’ each channel to the 0-1 range and then map it to 0-255 values.

However, the resulting image doesn’t look correct: it should look more distorted. I’ve probably made some rather basic error, which is why I was wondering if there is a simpler way of doing the transformation.

Sorry, I did not understand your question correctly at first.
Unfortunately, I do not know why an image normalized with a custom mean and std ends up with the same statistics and a low MSE when it is reconstructed with the min-max scaler, since logically the mean and std are different.

But your code can be simplified:

  1. ToTensor already scales input images to the range [0, 1], so you no longer need to divide by 255.
  2. The process of finding the min and max of each channel and then rescaling can be simplified to this:
min_i = x_tensor.min(dim=1, keepdim=True).values.min(dim=2, keepdim=True).values  # per-channel min, shape [C, 1, 1]
max_i = x_tensor.max(dim=1, keepdim=True).values.max(dim=2, keepdim=True).values  # per-channel max, shape [C, 1, 1]
x_tensor = ((x_tensor - min_i) / (max_i - min_i)) * 255
img2 = torch_transforms.ToPILImage()(x_tensor.type(torch.uint8))
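
If your torch version is recent enough (amin/amax were added around 1.7, if I remember correctly), the two chained reductions can be collapsed into one call each:

min_i = x_tensor.amin(dim=(1, 2), keepdim=True)  # per-channel min, shape [C, 1, 1]
max_i = x_tensor.amax(dim=(1, 2), keepdim=True)  # per-channel max, shape [C, 1, 1]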

I need to do some reviewing/experiments to understand what is happening and why there is no huge difference, as you have mentioned.
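
One plausible explanation: Normalize applies a per-channel affine map with a positive scale, and per-channel min-max rescaling is invariant under such maps, so the min-max reconstruction should come out the same with or without the normalization step. A quick self-contained check of this idea (a sketch using a random tensor as a stand-in image):

import torch
from torchvision import transforms

x = torch.rand(3, 8, 8)  # stand-in for an image tensor in [0, 1]
y = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])(x.clone())

def minmax(t):
    # rescale each channel to [0, 1] using its own min/max
    mn = t.amin(dim=(1, 2), keepdim=True)
    mx = t.amax(dim=(1, 2), keepdim=True)
    return (t - mn) / (mx - mn)

print(torch.allclose(minmax(x), minmax(y), atol=1e-5))  # expected: True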