I have a doubt related to the function torch.from_numpy. I’m trying to convert a numpy array that contains uint16 and I’m getting the following error:
TypeError: can’t convert np.ndarray of type numpy.uint16. The only supported types are: float64, float32, float16, int64, int32, int16, int8, uint8, and bool.
I suppose one way to solve that is to convert my uint16 numpy array to a numpy array with int32. Is there another way to solve this issue? Am I doing something wrong?
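For reference, a minimal sketch of the int32 workaround mentioned above (assuming the values fit, which they always do since int32 covers the full uint16 range):

```python
import numpy as np
import torch

# torch.from_numpy does not accept uint16, so cast first.
arr = np.array([0, 1000, 65535], dtype=np.uint16)

# int32 covers 0..65535, so this cast is lossless.
t = torch.from_numpy(arr.astype(np.int32))
print(t.dtype)  # torch.int32
```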
On the other hand, I was wondering how the to_tensor method from torchvision is able to transform uint16 images. I checked the source code and realized that there is a conversion to int16… However, the official documentation of the PIL library specifies that the I;16 mode indicates that the pixel contains an unsigned int16 value. Is there a bug on this specific line? Am I missing something?
I think the answer here depends on the use case.
If you want to be sure not to lose any precision, then yes, you want to convert it to int32 before converting to PyTorch.
If you don’t want to double the memory use, you can use int16 and it should work just fine as long as your numbers are not too big.
Do you encounter any issue with the choice made by torchvision?
Yes, in my case my numbers are big and I need the full range.
In that case, I guess the simplest is going to be to make sure your dataset returns Tensors directly, converting the arrays to int32 before turning them into torch Tensors.
I’m going to do that. Thanks. However, maybe it would be a good idea to document this decision about how uint16 is handled by the ToTensor() method.
cc @fmassa what do you think?
I have to change uint16 to int64, but my y variable for the neural network only has values in the range 0–9. Could you tell me a solution for this, please?
I have a uint16 target image with values between 0 and 9 — how can I use it in a PyTorch DataLoader?
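For a target with values only in 0–9, the precision concern from earlier in the thread does not apply: casting uint16 to int64 is lossless, and int64 is also the dtype that classification losses such as nn.CrossEntropyLoss expect. A sketch (the label mask here is made up for illustration):

```python
import numpy as np
import torch

# Hypothetical 4x4 uint16 label mask with class ids 0..9.
mask = np.random.randint(0, 10, size=(4, 4)).astype(np.uint16)

# Values 0-9 fit trivially in int64, so this cast loses nothing;
# do it inside your Dataset's __getitem__ before batching.
target = torch.from_numpy(mask.astype(np.int64))
print(target.dtype)  # torch.int64
```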