Style transfer implementation: artefacts in content loss

Hello, like many others I am implementing style transfer from scratch. Since my generated images were not very convincing, I decided to first get the content-loss part working on its own and simply try to reconstruct an image from a randomly initialised tensor.

Here are some of my results; I cannot get rid of the green and violet pixel artefacts.


While trying to understand how to solve this, I found this repo, which produces very pretty images and can reconstruct content images from a random tensor without the artefacts I get.

Just to clarify, I tried that code without the modifications they introduce (I used VGG16 instead of VGG19, kept the max-pool layers and in-place ReLU, and tried different layers for the content activations). Nevertheless, the reconstructions from that repo do not show the artefacts I get.

I would appreciate any help! This is a Google Colab with my code:

I got rid of the artefacts by clamping the pastiche tensor one more time after all the processing. I hope this thread is useful to someone else, since I struggled with this issue a lot (:
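A hedged sketch of the fix, in case it helps anyone (placeholder loss and names, not my actual notebook): the optimiser can push pixel values outside the valid range, and those out-of-range values show up as green/violet speckles, so the tensor is clamped back into [0, 1] after each step and once more before saving.

```python
import torch

pastiche = torch.rand(1, 3, 64, 64, requires_grad=True)  # random initialisation
opt = torch.optim.Adam([pastiche], lr=0.1)

for _ in range(10):
    opt.zero_grad()
    loss = (pastiche ** 2).mean()   # placeholder; the real loss is the content loss
    loss.backward()
    opt.step()
    with torch.no_grad():
        pastiche.clamp_(0.0, 1.0)   # keep pixel values in valid range

# Clamp one more time after all the processing, before plotting/saving.
final = pastiche.detach().clamp(0.0, 1.0)
```

The in-place `clamp_` has to run under `torch.no_grad()` because `pastiche` is a leaf tensor that requires grad.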