I have a question about the function torch.from_numpy. I’m trying to convert a NumPy array that contains uint16 values and I’m getting the following error:
TypeError: can’t convert np.ndarray of type numpy.uint16. The only supported types are: float64, float32, float16, int64, int32, int16, int8, uint8, and bool.
I suppose one way to work around this is to convert my uint16 numpy array to a numpy array with int32. Is there a better way to solve this issue? Am I doing something wrong?
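For reference, here is a minimal version of what I’m doing (the array contents are just dummy values):

```python
import numpy as np
import torch

arr = np.array([0, 1000, 65535], dtype=np.uint16)

# torch.from_numpy(arr) raises the TypeError above on this dtype.
# Workaround: widen to int32 first, which holds every uint16 value losslessly.
t = torch.from_numpy(arr.astype(np.int32))
print(t.dtype)  # torch.int32
```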
On the other hand, I was wondering how the to_tensor method from torchvision is able to handle uint16 images. I checked the source code and realized that there is a conversion to int16… However, the official documentation of the PIL library specifies that the I;16 mode stores pixels as unsigned 16-bit integers. Is there a bug on this specific line? Am I missing something?
I think the answer here depends on the use case.
If you want to be sure not to lose any precision, then yes, you want to convert it to int32 before converting to PyTorch.
If you don’t want to double the memory usage, you can use int16 and it should work just fine as long as your values are not too big (at most 32767).
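To illustrate the precision trade-off with a quick sketch: int32 represents every uint16 value exactly, whereas reinterpreting the same bytes as int16 wraps anything above 32767 into a negative number.

```python
import numpy as np

arr = np.array([100, 40000], dtype=np.uint16)

# Safe: int32 can hold the full uint16 range (0..65535)
print(arr.astype(np.int32))  # [  100 40000]

# Same bytes viewed as int16: 40000 wraps to 40000 - 65536 = -25536
print(arr.view(np.int16))    # [  100 -25536]
```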
Do you encounter any issue with the choice made by torchvision?
In that case, I guess the simplest fix is to make sure your dataset converts the arrays to int32 before turning them into torch Tensors, and returns Tensors directly.
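A minimal sketch of what I mean, assuming your data is a collection of uint16 NumPy arrays (the class and attribute names here are just placeholders):

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class Uint16Dataset(Dataset):
    """Hypothetical dataset holding uint16 NumPy arrays."""

    def __init__(self, arrays):
        self.arrays = arrays  # list of np.uint16 arrays

    def __len__(self):
        return len(self.arrays)

    def __getitem__(self, idx):
        # Widen to int32 before torch.from_numpy, since uint16
        # is not among the supported conversion dtypes.
        return torch.from_numpy(self.arrays[idx].astype(np.int32))

ds = Uint16Dataset([np.array([1, 65535], dtype=np.uint16)])
print(ds[0])  # tensor([    1, 65535], dtype=torch.int32)
```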