I’m using torchvision.transforms to normalize my images before sending them to a pre-trained vgg19.
Therefore I have the following:
normalize = transforms.Normalize(mean = [ 0.485, 0.456, 0.406 ], std = [ 0.229, 0.224, 0.225 ])
My process is generative: I get an image back from it, but in order to visualize it I’d like to “un-normalize” it.
Is there a simple way, in the API, to invert the normalize transform?
Or should it be coded by hand?
Also I’m a bit surprised that the process works really well without any normalization step.
The whole thing is about style transfer, from this paper: https://arxiv.org/abs/1508.06576, and there’s a nice pytorch implementation out there (not mine) here: https://github.com/alexis-jacq/Pytorch-Tutorials.
That implementation doesn’t normalize anything before feeding images to vgg19 and the results are OK.
Basically vgg19 is used to extract features from the images fed to it.
Your thoughts on why it still works?