Some questions about the style-transfer implementation

I want to implement a style-transfer demo using a pretrained VGG model. How can I use the torchvision.models package, which provides the pretrained model, and obtain the output of each activation layer (to compute the style loss)? Also, how can I define the optimizer to optimize the generated image in order to minimize the loss?

This link should be helpful.

Thank you so much! I just discovered that there are so many good resources on the website you provided.