Data Loss when converting images from np.array to PIL Image

I want to use a CNN for classification of galaxies. For this, I intended to use the torchvision.transforms module to perform some data augmentation. However, these transforms require PIL Images, so I have to convert my float-type images to uint8 and therefore lose some data. Can I do anything to prevent this, or is the loss not relevant anyway?
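A minimal sketch of the loss in question, using a made-up random array in place of a real galaxy image: casting to uint8 quantises the pixel values to 256 levels, so the fine-grained float detail cannot be recovered.

```python
import numpy as np

# Hypothetical float64 image with fine-grained values in [0, 1]
img = np.random.rand(4, 4)

# Scaling to uint8 quantises the image to 256 discrete levels
as_uint8 = (img * 255).astype(np.uint8)

# Converting back shows a non-zero quantisation error
restored = as_uint8.astype(np.float64) / 255
print(np.abs(img - restored).max())
```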

Hi,

There is a mode='F' in PIL which enables 32-bit floating-point images. Have you tried it?
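A minimal sketch, assuming your image is (or can be cast to) a float32 NumPy array; the array and its shape here are made up for illustration:

```python
import numpy as np
from PIL import Image

# Hypothetical float32 image; values need not lie in [0, 255]
arr = np.linspace(0.0, 1.0, 64 * 64, dtype=np.float32).reshape(64, 64)

# mode='F' keeps 32-bit floating-point pixels instead of uint8
img = Image.fromarray(arr, mode='F')
print(img.mode)          # 'F'

# Round-trips back to NumPy without quantisation to uint8
back = np.asarray(img)
```

Note that not every torchvision transform handles mode 'F' images, so it is worth testing your specific augmentation pipeline.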

Also, if you really need to work with the original images (possibly NumPy arrays), you can use Kornia after converting them to tensors. It works directly on tensors and supports many different transforms, as in the sketch below.
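A rough sketch with Kornia's augmentation module; the batch shape, channel count, and the particular augmentations are just placeholders:

```python
import torch
import kornia.augmentation as K

# Hypothetical batch of float32 galaxy images, shape (B, C, H, W)
imgs = torch.rand(8, 1, 64, 64)

# Augmentations run directly on tensors, so no uint8/PIL round-trip
aug = torch.nn.Sequential(
    K.RandomHorizontalFlip(p=0.5),
    K.RandomRotation(degrees=15.0),
)

out = aug(imgs)
print(out.shape)  # torch.Size([8, 1, 64, 64])
```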

Bests

Thank you!! I figured out I could also just change the original image from float64 to float32 and then I didn’t need to specify a mode at all.
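For reference, a small sketch of that conversion (the array here is a stand-in for a real galaxy image): casting float64 to float32 lets PIL infer mode 'F' on its own.

```python
import numpy as np
from PIL import Image

# Hypothetical float64 galaxy image
img = np.random.rand(128, 128)

# Down-cast to float32; PIL picks mode 'F' automatically
pil_img = Image.fromarray(img.astype(np.float32))
print(pil_img.mode)  # 'F'
```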

Oh, I did not know that your images are in float64. Yes, if you convert to float32, PIL will automatically use mode='F'.

Good luck