What does PIL Images of range [0,1] mean and how do we save images as that format?

I am trying to train a VGG network on custom training data, but I am unsure how to save the images in the format the PyTorch models expect. Currently I just save them normally with values 0-255 using PIL after preprocessing. Should I divide the images by 255.0 before saving them? (That makes the image colors look different.)

result = Image.fromarray(image_to_write.astype(np.uint8))

Is there a different format I should save them in other than JPEG, and what other steps do I need to take to make sure the model gets the correct input?

(I still normalize the data with mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] in my dataloader)

This is what you’re looking for: http://pytorch.org/docs/torchvision/transforms.html#torchvision.transforms.ToTensor
ToTensor converts the image to a tensor with values in [0, 1], which then goes into your network.
So it’s fine to keep your images saved with pixel values in [0, 255].

transforms.Normalize is defined on a tensor (i.e. it is applied after ToTensor), so your method of normalizing the images is correct.
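Putting the two transforms together, a typical pipeline is transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean, std)]). As a sketch of what those two steps do numerically (written in plain NumPy here so it runs without torchvision; the shapes and constants mirror the ImageNet preprocessing above):

```python
import numpy as np

# A dummy 8-bit RGB image, as loaded from a JPEG saved the "normal"
# way with pixel values 0-255, in (H, W, C) layout.
img = np.full((4, 4, 3), 128, dtype=np.uint8)

# ToTensor equivalent: scale uint8 [0, 255] to float [0, 1]
# and move channels first, giving (C, H, W).
x = img.astype(np.float32) / 255.0
x = x.transpose(2, 0, 1)

# Normalize equivalent: subtract the per-channel ImageNet mean
# and divide by the per-channel std.
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32).reshape(3, 1, 1)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32).reshape(3, 1, 1)
x = (x - mean) / std

print(x.shape)  # (3, 4, 4)
```

So no division by 255.0 is needed at save time; ToTensor does that scaling on the fly when the image is loaded.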