joels
(Joel Simon)
September 12, 2017, 2:24am
#3
Hey
There doesn't seem to be a built-in way to reverse the normalization. However, it is pretty straightforward to write a simple class that does so.
class UnNormalize(object):
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std

    def __call__(self, tensor):
        """
        Args:
            tensor (Tensor): Tensor image of size (C, H, W) to be un-normalized.
        Returns:
            Tensor: Un-normalized image.
        """
        for t, m, s in zip(tensor, self.mean, self.std):
            t.mul_(s).add_(m)
            # The normalize code -> t.sub_(m).div_(s)
        return tensor
You instantiate it with the same arguments used for the normalization and then use it the same way:
unorm = UnNormalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
unorm(tensor)
53 Likes
svd3
(Saurabh)
March 22, 2018, 9:33pm
#4
The easiest way would be:
invTrans = transforms.Compose([
    transforms.Normalize(mean=[0., 0., 0.],
                         std=[1/0.229, 1/0.224, 1/0.225]),
    transforms.Normalize(mean=[-0.485, -0.456, -0.406],
                         std=[1., 1., 1.]),
])
inv_tensor = invTrans(inp_tensor)
24 Likes
mjust.lkc
(Kaican Li)
May 19, 2018, 9:20am
#5
A more concise approach based on Saurabh’s answer:
inv_normalize = transforms.Normalize(
    mean=[-0.485/0.229, -0.456/0.224, -0.406/0.255],
    std=[1/0.229, 1/0.224, 1/0.255]
)
inv_tensor = inv_normalize(tensor)
7 Likes
svd3
(Saurabh)
August 8, 2018, 1:25am
#6
You’ve adjusted the mean wrongly. The mean should be
mean=[-0.485 * 0.229, -0.456 * 0.224, -0.406 * 0.255]
because the renormalization needs the answer to be (X/sigma - mu) = (X - mu*sigma)/sigma.
mjust.lkc
(Kaican Li)
August 8, 2018, 5:59pm
#7
The inverse normalization should be
x = z * sigma + mean
  = (z + mean/sigma) * sigma
  = (z - (-mean/sigma)) / (1/sigma),
since the normalization process is actually z = (x - mean) / sigma, if you look carefully at the documentation of transforms.Normalize. I tested my original code and it worked fine.
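A quick round-trip sanity check of this identity (a minimal sketch; the mean/std values and the random test tensor are just placeholders):

import torch
from torchvision import transforms

mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

normalize = transforms.Normalize(mean, std)
# z = (x - mean) / sigma, so x = (z - (-mean/sigma)) / (1/sigma)
inv_normalize = transforms.Normalize(
    mean=[-m / s for m, s in zip(mean, std)],
    std=[1 / s for s in std]
)

x = torch.rand(3, 8, 8)              # fake (C, H, W) image in [0, 1]
z = normalize(x.clone())             # clone, since Normalize may work in-place
x_rec = inv_normalize(z)
print(torch.allclose(x, x_rec, atol=1e-6))  # should print True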
7 Likes
dizcza
(Danylo Ulianych)
October 10, 2018, 4:48pm
#8
Keep in mind that torchvision.transforms.Normalize's operations are in-place. If you want an immutable implementation:
class NormalizeInverse(torchvision.transforms.Normalize):
    """
    Undoes the normalization and returns the reconstructed images in the input domain.
    """

    def __init__(self, mean, std):
        mean = torch.as_tensor(mean)
        std = torch.as_tensor(std)
        std_inv = 1 / (std + 1e-7)
        mean_inv = -mean * std_inv
        super().__init__(mean=mean_inv, std=std_inv)

    def __call__(self, tensor):
        return super().__call__(tensor.clone())
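A minimal usage sketch (assuming the ImageNet statistics; normalized_tensor is a placeholder for a (C, H, W) tensor that was normalized with the same values):

denorm = NormalizeInverse(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
restored = denorm(normalized_tensor)  # normalized_tensor itself stays unchanged thanks to clone()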
9 Likes
Thank you.
But there is a typo in the third element of the mean: it's 0.225, not 0.255.
inv_normalize = transforms.Normalize(
    mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
    std=[1/0.229, 1/0.224, 1/0.255]
)
inv_tensor = inv_normalize(tensor)
2 Likes
nofreewill
(Jonatán Iván)
February 5, 2020, 7:52pm
#11
There is still a typo in the std
inv_normalize = transforms.Normalize(
    mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
    std=[1/0.229, 1/0.224, 1/0.225]
)
inv_tensor = inv_normalize(tensor)
2 Likes
Ksalomon
(Salomon Kabongo KABENAMUALU)
April 29, 2020, 12:44am
#13
tsterin:
normalize = transforms.Normalize(mean = [ 0.485, 0.456, 0.406 ], std = [ 0.229, 0.224, 0.225 ])
A slightly modified version of the above code. This assumes mean and std are already defined:
inv_normalize = transforms.Normalize(
    mean=[-m/s for m, s in zip(mean, std)],
    std=[1/s for s in std]
)
inv_tensor = inv_normalize(tensor)
1 Like
Sorry to bother, but this doesn't seem to work for me: inv_normalize(tensor) raises a TypeError complaining that the tensor is not a torch image.
Is this a version issue? Thanks in advance!
Could you check the shape and type of the input you are passing to this method? I guess the number of dimensions or the type might be unexpected, and thus Normalize might raise this issue.
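For instance, something like this (a hedged sketch; img stands for whatever you are currently passing in). Older torchvision versions raise exactly this "tensor is not a torch image" error when the input is not a 3D (C, H, W) torch.Tensor, e.g. when it is still a PIL image or a NumPy array:

print(type(img), getattr(img, "shape", None), getattr(img, "dtype", None))

# Normalize expects a float tensor shaped (C, H, W), so convert first, e.g. with ToTensor()
img_t = transforms.ToTensor()(img)   # -> float tensor in [0, 1], shape (C, H, W)
out = inv_normalize(img_t)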
1 Like
yptheangel
(Choo Wilson)
October 16, 2020, 1:45pm
#16
Following @joels' method, it worked for me:
def inverse_normalize(tensor, mean, std):
    for t, m, s in zip(tensor, mean, std):
        t.mul_(s).add_(m)
    return tensor

input_tensor = inverse_normalize(tensor=input_tensor, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
2 Likes
yptheangel
(Choo Wilson)
November 7, 2020, 6:36am
#17
I found out that even after performing the inverse normalization, the inverse-normalized OpenCV image is not equivalent to the original image loaded via cv2.imread. I am sticking to getting a mask from the PyTorch model output and multiplying it with the original frame from cv2.imread.
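One likely cause (a sketch under the assumption that the frame went through a BGR-to-RGB conversion, ToTensor and Normalize on the way in): ToTensor rescales to [0, 1] and OpenCV stores uint8 BGR, so after un-normalizing those two steps still have to be undone before comparing:

import cv2

img = inverse_normalize(tensor.clone(), mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
img = (img.clamp(0, 1) * 255).byte()         # back to uint8
img = img.permute(1, 2, 0).cpu().numpy()     # (C, H, W) -> (H, W, C)
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)   # back to OpenCV's BGR channel order
# img should now match the cv2.imread result up to rounding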
hustzwen
(Hustzwen)
July 2, 2021, 4:43am
#18
Like this:
def inverse_normalize(tensor, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
    mean = torch.as_tensor(mean, dtype=tensor.dtype, device=tensor.device)
    std = torch.as_tensor(std, dtype=tensor.dtype, device=tensor.device)
    if mean.ndim == 1:
        mean = mean.view(-1, 1, 1)
    if std.ndim == 1:
        std = std.view(-1, 1, 1)
    tensor.mul_(std).add_(mean)
    return tensor
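A usage sketch (the function modifies its argument in-place via mul_/add_, so pass a clone if you want to keep the normalized version; normalized is a placeholder name):

restored = inverse_normalize(normalized.clone())
# works on a single (C, H, W) image, and the (-1, 1, 1) view also lets it
# broadcast over a batched (N, C, H, W) tensor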
3 Likes
Apogentus
(Apogentus)
July 5, 2021, 10:55am
#19
Works like a charm. Normalization + un-normalization gives back the original image.
Giorgio
(Giorgio)
September 30, 2021, 11:04am
#20
This code seems simpler. If you want to reverse the normalization, all you need to do is use a new normalization with slight modifications:
mean = torch.tensor([0.4915, 0.4823, 0.4468])
std = torch.tensor([0.2470, 0.2435, 0.2616])

normalize = transforms.Normalize(mean.tolist(), std.tolist())
unnormalize = transforms.Normalize((-mean / std).tolist(), (1.0 / std).tolist())

img_unn = unnormalize(img)
plt.imshow(img_unn.permute(1, 2, 0))
plt.show()
Anyway, I hate the fact that we don't have an un-normalize (de-normalize) transform; it would be quite helpful for regression neural networks too.
(GitHub issue, opened 5 Jun 2018, closed 6 Jun 2018): Basically the inverse of `transforms.Normalize`, as this will allow us to visualize tensors during training more easily.
Here is some working code, as this is still one of the first Google results for inverse normalization. It follows the torchvision.transforms convention, so you can compose it.
Code Gist
class UnNormalize(torchvision.transforms.Normalize):
    def __init__(self, mean, std, *args, **kwargs):
        new_mean = [-m/s for m, s in zip(mean, std)]
        new_std = [1/s for s in std]
        super().__init__(new_mean, new_std, *args, **kwargs)
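A usage sketch of the composability this refers to (assuming ImageNet statistics; normalized_tensor is a placeholder (C, H, W) tensor):

to_image = torchvision.transforms.Compose([
    UnNormalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    torchvision.transforms.ToPILImage(),
])
pil_img = to_image(normalized_tensor)   # un-normalize, then convert to a viewable PIL image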
Yes, I'm having this issue too.
Is there any method that can fix it?
deksa89
(Dean)
March 30, 2022, 5:28pm
#23
Hi, this is my solution to that problem:
import numpy as np
import matplotlib.pyplot as plt
from torchvision.transforms import Compose, Resize, ToTensor, Normalize

# normalize images
image_transforms = Compose([
    Resize((256, 256)),
    ToTensor(),
    Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

# unnormalize images
def imshow(image):
    npimg = image.numpy()
    npimg = np.transpose(npimg, (1, 2, 0))
    npimg = (npimg * [0.229, 0.224, 0.225]) + [0.485, 0.456, 0.406]
    plt.imshow(npimg, interpolation='nearest')
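A usage sketch for the above (example.jpg is a placeholder path):

from PIL import Image

img = image_transforms(Image.open("example.jpg").convert("RGB"))  # normalized (C, H, W) tensor
imshow(img)
plt.show()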