Are VGG networks trained with [0, 255] or [0.0, 1.0] tensors?

There are two seemingly contradictory explanations in the PyTorch neural style transfer tutorial.
1: Now we will import the style and content images. The original PIL images have values between 0 and 255, but when transformed into torch tensors, their values are converted to be between 0 and 1. The images also need to be resized to have the same dimensions. An important detail to note is that neural networks from the torch library are trained with tensor values ranging from 0 to 1. If you try to feed the networks with 0 to 255 tensor images, then the activated feature maps will be unable to sense the intended content and style. However, pre-trained networks from the Caffe library are trained with 0 to 255 tensor images.
Do you need to change the tensor from [0, 1] to [0, 255] before feeding it to the VGG network?
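For context, here is a minimal sketch of what the tutorial's first claim means (the file name is hypothetical): `transforms.ToTensor()` converts a PIL image with values 0–255 into a float tensor in [0, 1]:

```python
from PIL import Image
import torchvision.transforms as transforms

# Hypothetical file name; any RGB image behaves the same way.
pil_image = Image.open("style.jpg").convert("RGB")

to_tensor = transforms.ToTensor()  # converts uint8 [0, 255] to float [0.0, 1.0]
tensor = to_tensor(pil_image)

print(tensor.min().item(), tensor.max().item())  # both values fall within [0.0, 1.0]
```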
2: Additionally, VGG networks are trained on images with each channel normalized by mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225]. We will use them to normalize the image before sending it into the network.
This suggests that the tensor fed into the VGG network should still be in [0, 1].
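A small sketch of what that normalization step looks like, assuming the input tensor is already in [0, 1] (the dummy tensor below stands in for a real image):

```python
import torch
import torchvision.transforms as transforms

# The ImageNet statistics quoted in the tutorial.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

# Dummy [0, 1] tensor of shape (C, H, W); in real code this would be
# the output of transforms.ToTensor().
image = torch.rand(3, 224, 224)
normalized = normalize(image)  # per-channel: (x - mean) / std
```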

Here is your answer: https://pytorch.org/docs/stable/torchvision/models.html

[…] The images have to be loaded in to a range of [0, 1] […]
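So no, you do not rescale to [0, 255] for torchvision's pretrained models; that convention applies to Caffe-trained weights. Putting it together, a minimal sketch of the pipeline the docs describe (the image path is hypothetical): load in [0, 1] via `ToTensor`, normalize with the ImageNet mean/std, and feed the batch to a pretrained VGG:

```python
import torch
from PIL import Image
import torchvision.models as models
import torchvision.transforms as transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),                      # [0, 255] uint8 -> [0.0, 1.0] float
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

vgg = models.vgg19(pretrained=True).eval()      # torchvision's pretrained VGG-19

image = Image.open("content.jpg").convert("RGB")  # hypothetical path
batch = preprocess(image).unsqueeze(0)            # add batch dimension: (1, 3, 224, 224)

with torch.no_grad():
    features = vgg(batch)
```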