PyTorch tutorial for Neural transfer of artistic style

Did you update to 0.1.10?

@ecolss, also I wrote the code using Python 2; I don’t know whether it works with Python 3.6.

Yes, I did.

As @alexis-jacq mentioned, it is a Python 2 implementation; however, I don’t think that is the problem here.

My thought was this: the input is cloned and resized inside the GramMatrix module, and the style loss is then computed on that result. Since the error occurs during the style loss backward(), could it be that the gradient of the style loss with respect to the GramMatrix output is a 2-dimensional tensor, and the gradient with respect to the cloned and resized input is then not computed properly?

@apaszke any suggestions to debug this?

@apaszke @alexis-jacq

After debugging for a while, I found the root cause of the error:

Variable.data.resize_() -> Variable.resize().

I replaced this line: https://github.com/alexis-jacq/Pytorch-Tutorials/blob/master/Neural_Style.py#L82
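For anyone hitting the same crash, a minimal sketch of the pattern inside the GramMatrix module (using .view, which the discussion below settles on; the names and normalization are illustrative, not the tutorial’s exact code):

```python
import torch
import torch.nn as nn

class GramMatrix(nn.Module):
    """Gram matrix of the feature maps, used for the style loss."""
    def forward(self, input):
        b, c, h, w = input.size()
        # Reshaping via input.data.resize_(...) mutates the storage directly
        # and bypasses autograd, which is what broke the style loss backward().
        # Reshaping the tensor itself keeps the op in the graph:
        features = input.view(b * c, h * w)
        gram = torch.mm(features, features.t())  # feature correlations
        return gram.div(b * c * h * w)           # normalize by element count
```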


Thanks for reporting this issue.

The code is working on my computer, but in any case I wrote it quickly when I was discovering PyTorch, so I am not surprised that it causes bugs on other systems. It is full of hacks and the implementation is not clean (as you can see here: How to extract features of an image from a trained model). I have to rewrite it, and I will do so as soon as I have time.

@ecolss Wouldn’t data.view be more appropriate than data.resize in this case? The output tensor has the same number of elements, just a different shape. I think PyTorch’s view is very similar to numpy’s reshape method.

Yes, it’s better to use .view

.view is cool, I just wasn’t aware of it before.
However, doesn’t .resize also return a view? I mean, is there any particular difference between the two?

.view is way, way safer than .resize, and there are hardly any cases where .resize should be used in user scripts. .view will raise an error if you try to get a tensor with a different number of elements, or if the tensor isn’t contiguous (.resize can give you a tensor that views memory that was never initialized).
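To make the difference concrete, a quick illustration (written against current PyTorch, where the in-place spelling is .resize_):

```python
import torch

x = torch.arange(6.0)
y = x.view(2, 3)        # OK: 6 elements reinterpreted as 2x3
# x.view(2, 4)          # RuntimeError: shape '[2, 4]' is invalid for input of size 6

t = torch.arange(6.0).view(2, 3).t()  # transpose -> non-contiguous
# t.view(6)             # RuntimeError: view is incompatible with non-contiguous memory
t = t.contiguous().view(6)            # the explicit, safe fix

z = torch.arange(6.0)
z.resize_(2, 4)         # no error: storage grows, the 2 extra values are uninitialized
```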

@apaszke Noted, thanks

Hi Alexis, cool work!
I think you can get better results by using LBFGS though. You can check here for an implementation: https://github.com/leongatys/PytorchNeuralStyleTransfer
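For reference, the basic L-BFGS pattern in PyTorch looks roughly like this (a sketch with a dummy loss; input_img and compute_losses here stand in for the tutorial’s optimized image and its style/content losses):

```python
import torch
import torch.optim as optim

input_img = torch.randn(1, 3, 256, 256, requires_grad=True)
optimizer = optim.LBFGS([input_img])

def compute_losses(img):
    return (img ** 2).mean()  # placeholder for the real style + content loss

for step in range(10):
    def closure():
        # L-BFGS may re-evaluate the loss several times per step,
        # so the forward/backward pass has to live in a closure.
        optimizer.zero_grad()
        loss = compute_losses(input_img)
        loss.backward()
        return loss
    optimizer.step(closure)
```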

Best
Leon


Thanks! I am on my way to try it! If you don’t mind, I will also borrow your download_model.sh so I can add VGG usage to the tutorial :innocent:

I’ve been meaning to upload this for a while. It may not be useful anymore since @leongatys has been so generous as to give us the ‘official’ version… :slight_smile:

anyway:
https://github.com/tymokvo/pt-styletransfer

@alexis-jacq
I stole your style transfer code and made some changes. I mainly wanted to use VGG and make the style transfer network importable in other scripts.

Sorry the code is sloppy; I made it in a hurry.

Also, if you haven’t yet, you should try saving every iteration into a GIF. Makes some cool animations.
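In case it helps, a rough sketch of the GIF idea with imageio (the noise update here is just a stand-in for the real optimization step):

```python
import imageio
import numpy as np
import torch

input_img = torch.rand(1, 3, 64, 64)  # stand-in for the optimized image

frames = []
for step in range(30):
    input_img = (input_img + 0.02 * torch.randn_like(input_img)).clamp(0, 1)
    # Convert the 1 x C x H x W tensor into an H x W x C uint8 frame.
    frame = input_img.squeeze(0).permute(1, 2, 0).numpy()
    frames.append((frame * 255).astype(np.uint8))

imageio.mimsave('style_transfer.gif', frames)  # one frame per iteration
```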


@tymokvo It looks nice! And thanks for the citation :wink:

I think we should definitely move to Leon’s way of extracting features from VGG. As soon as I have time (by dint of playing with PyTorch, I am falling very behind on my PhD…), I will adapt the tutorial based on his idea.
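Roughly, the idea is a single forward pass over VGG’s feature layers, collecting the activations you need along the way (a sketch with torchvision’s pretrained VGG19; the layer indices are illustrative):

```python
import torch
import torchvision.models as models

vgg = models.vgg19(pretrained=True).features.eval()

def extract_features(img, layers=(3, 8, 17, 26)):
    feats = []
    x = img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:        # grab the activations we care about
            feats.append(x)
    return feats

with torch.no_grad():
    feats = extract_features(torch.rand(1, 3, 224, 224))
```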


Thanks for this implementation; the feature extraction and loss code is Pythonic and elegant. I learned a lot!


@alexis-jacq When you use the pretrained VGG net, you should normalize the input image using mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225]. However, your code does not normalize the input. I think adding the normalization will make the results better.

See here for details.
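For reference, the usual way to apply those statistics with torchvision (a sketch; the resize value is arbitrary):

```python
import torchvision.transforms as transforms

# ImageNet statistics expected by torchvision's pretrained VGG models.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.ToTensor(),  # PIL image -> float tensor in [0, 1]
    normalize,              # shift/scale into the range VGG was trained on
])
```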


Thanks for the code. I see you are not clipping the image range in each iteration, but only at the very end. Is that correct?
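If per-iteration clamping is wanted, a minimal sketch (assuming the image is kept in [0, 1]):

```python
import torch

input_img = torch.rand(1, 3, 64, 64, requires_grad=True)

# ...after each optimizer.step(), clamp in place under no_grad so the
# correction itself isn't tracked by autograd:
with torch.no_grad():
    input_img.clamp_(0, 1)
```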

I still see this issue in the PyTorch tutorial.

It will be fixed in https://github.com/pytorch/tutorials/pull/223
