Vgg_normalised in Neural Style Transfer task

I have been learning NST and implemented some papers with PyTorch from scratch, like "Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization" (Huang et al., ICCV 2017).
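For context, here is a minimal sketch of the core AdaIN operation from that paper as I understand it (the function name and `eps` default are my own choices):

```python
import torch

def adain(content, style, eps=1e-5):
    """Align the channel-wise mean/std of content features to the style's.

    content, style: feature maps of shape (N, C, H, W).
    """
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps  # eps avoids div by zero
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True)
    # Normalize content statistics, then re-scale/shift with style statistics.
    return s_std * (content - c_mean) / c_std + s_mean
```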

My implementation is available from

Check it out if you are interested. The output of my implementation looks comparable to the original paper's.

However, I have two questions about the official torch implementation.

  1. I don’t understand why the vgg_normalised model is used, so I just use the
    pre-trained vgg19 from torchvision.models. Is that OK?
  2. I also want to know why ReflectionPad2d is used instead of normal (zero) padding in the upsample layers of the decoder.

I would really appreciate it if someone could help me. Thanks!
