I learned to play with NST and implemented some papers with PyTorch from scratch, such as "Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization" (Huang et al., ICCV 2017).
My implementation is available from https://github.com/irasin/Pytorch_Adain_from_scratch.
Check it out if you are interested. The output of my implementation looks comparable to the results in the original paper.
However, I have two questions about the official torch implementation.
- I don't understand why the vgg_normalised model is used, so I just use the pre-trained vgg19 from torchvision.models instead. Is that OK?
- I also want to know why ReflectionPad2d is used instead of normal zero padding in the upsample layers of the Decoder.
I would really appreciate it if someone could help me, thanks.