I am trying to train a VGG network on custom training data, but I am unsure how to save the images in the format the PyTorch models expect. Currently I just save them with values 0-255 using PIL after preprocessing. Should I divide the images by 255.0 before saving them? (Doing that makes the colors of the saved image look different.)
from PIL import Image
import numpy as np

# Save the preprocessed array (values 0-255) as an 8-bit image.
result = Image.fromarray(image_to_write.astype(np.uint8))
result.save("filename.jpg")
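For what it's worth, here is a minimal sketch of what I assume happens if the division is done before the uint8 cast: almost every scaled value truncates to 0 or 1, which would explain why the saved image looks different. The array here is just a hypothetical stand-in for my preprocessed data.

import numpy as np
from PIL import Image

# Hypothetical stand-in for a preprocessed image with values 0-255.
image_to_write = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

# Dividing first gives floats in [0, 1]; casting to uint8 afterwards truncates
# nearly everything to 0 (and the 255s to 1), so the saved file looks wrong.
scaled = image_to_write / 255.0
result = Image.fromarray(scaled.astype(np.uint8))
result.save("scaled_filename.jpg")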
Is there a different format I should save the images in instead of JPEG, and what other steps do I need to take to make sure the model gets the correct input?
(I still normalize the data with mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225] in my dataloader.)
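For reference, the relevant part of my dataloader transform looks roughly like this. It is a simplified sketch, and I am assuming torchvision's ToTensor() is what scales the saved 0-255 pixels down to [0, 1] before Normalize runs:

import torchvision.transforms as transforms

# ToTensor converts the PIL image (uint8, 0-255) to a float tensor in [0, 1];
# Normalize then applies the ImageNet mean/std per channel.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])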