How to use a VGG-19 network to estimate perceptual loss?

I want to use the pre-trained VGG-19 model from torchvision.models.vgg to extract features of the ground truth and of my estimated results from the conv1_1, conv2_1, conv3_1, pool1, and pool2 layers, just like the perceptual loss in neural style transfer.
I am having trouble using the pre-trained model to get the feature maps.

You can look at the fast-neural-style example in the PyTorch examples repo:

https://github.com/pytorch/examples/blob/master/fast_neural_style/neural_style/neural_style.py
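
For the specific layers you listed, a minimal sketch along these lines should work. This is not the code from the linked example; the `LAYERS` mapping and the `VGGFeatures` and `perceptual_loss` names are just for illustration, and the layer indices assume torchvision's VGG-19 *without* batch norm (conv1_1=0, pool1=4, conv2_1=5, pool2=9, conv3_1=10):

```python
import torch
import torch.nn as nn
from torchvision import models

# Indices of the requested layers inside vgg19(...).features.
# These hold for the torchvision VGG-19 without batch norm.
LAYERS = {0: 'conv1_1', 4: 'pool1', 5: 'conv2_1', 9: 'pool2', 10: 'conv3_1'}

class VGGFeatures(nn.Module):
    """Runs the input through VGG-19 and collects the requested feature maps."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg19(pretrained=True).features
        # Keep only the layers up to the deepest one we need.
        self.slice = nn.Sequential(*[vgg[i] for i in range(max(LAYERS) + 1)])
        # Freeze VGG; it is only used as a fixed feature extractor.
        for p in self.slice.parameters():
            p.requires_grad = False

    def forward(self, x):
        feats = {}
        for i, layer in enumerate(self.slice):
            x = layer(x)
            if i in LAYERS:
                feats[LAYERS[i]] = x
        return feats

def perceptual_loss(vgg_features, estimate, target):
    """Sum of MSE distances between the feature maps of the two images."""
    f_est = vgg_features(estimate)
    with torch.no_grad():  # no gradients needed for the ground truth
        f_gt = vgg_features(target)
    return sum(nn.functional.mse_loss(f_est[k], f_gt[k]) for k in f_est)

# Usage: both tensors should be ImageNet-normalized (N, 3, H, W) batches.
# vgg_features = VGGFeatures().eval()
# loss = perceptual_loss(vgg_features, output, ground_truth)
```

Two things to keep in mind: the pre-trained weights expect inputs normalized with the ImageNet mean and std, and the gradient only needs to flow through your estimate, not through the ground truth features.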