Float64 is not supported when resizing ndarray

I am dealing with a dataset that is being loaded as a numpy.ndarray datatype. I want to resize the loaded images from 1x190x200 to 1x256x256 using transforms.Resize(256).

The problem is that this transform only accepts PIL images, not ndarrays. I have tried using functional.to_pil_image(image), but I get the following error:

Input type float64 is not supported

(This error appears even when I duplicate the channels so that the input image is 3x190x200.)

Using np.uint8(image) completely destroys the image, and I don’t want to change the datatype to float32.
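A side note on why np.uint8 "destroys" the image: if the pixel values are floats in [0, 1), casting to uint8 truncates every value to 0. A hedged sketch (assuming the values are normalized to [0, 1); the dummy array stands in for the real data):

```python
import numpy as np

img = np.random.rand(190, 200)  # float64 values in [0, 1), like a normalized image

bad = np.uint8(img)  # truncates everything below 1.0 down to 0 -> all-black image
assert bad.max() == 0

# scaling to the 0-255 range first keeps the image intact
good = (img * 255).round().astype(np.uint8)
```

This doesn't help if the goal is to stay in float64, but it explains the all-black result.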

Is there any way to get around this problem and to resize an ndarray?
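One way around the PIL restriction is to skip transforms.Resize entirely and resize the tensor directly with torch.nn.functional.interpolate, which works on float64 tensors. A minimal sketch (the random array is a stand-in for the actual loaded image):

```python
import numpy as np
import torch
import torch.nn.functional as F

img = np.random.rand(1, 190, 200)  # float64 ndarray, shape CxHxW

# interpolate expects a 4D NxCxHxW tensor, so add a batch dimension
t = torch.from_numpy(img).unsqueeze(0)  # shape 1x1x190x200, dtype torch.float64

resized = F.interpolate(t, size=(256, 256), mode="bilinear", align_corners=False)

out = resized.squeeze(0).numpy()  # back to 1x256x256, still float64
```

The dtype is preserved end to end, so no float32 conversion is needed.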



Why don’t you want to use float32? Pytorch’s FloatTensor (the default tensor type) is actually float32.

I want to leave the image resolution as it is. Is there any way to do it?

Float vs. double changes the precision of each pixel’s value; it has nothing to do with image resolution. Also, most models (all, actually) have weights in float32, so you don’t lose much precision.
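To make the precision-vs-resolution distinction concrete: casting float64 to float32 leaves the array shape (the resolution) untouched and only perturbs each value by a tiny rounding error. A quick check with a dummy array:

```python
import numpy as np

img64 = np.random.rand(1, 190, 200)       # float64 image
img32 = img64.astype(np.float32)          # only the per-pixel precision changes

assert img32.shape == img64.shape         # resolution is identical

# the worst-case rounding error is on the order of float32 machine epsilon
max_err = np.abs(img64 - img32.astype(np.float64)).max()
```

For values in [0, 1], max_err is around 1e-8, far below anything visible in an image.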