Support for reading uint16, uint32 type image

Hi, I am new to libtorch and I’m trying to use it for inference. In my case, the input image is queried from an API as a 1-D pixel buffer, which I then convert into a tensor with torch::from_blob:

int width = 512;
int height = 512;
uint16_t* pixelData;
some_initialization(pixelData); // do something to load the data.

auto options = c10::TensorOptions().dtype(torch::kInt16);
at::Tensor t = torch::from_blob(pixelData, { width, height }, options);

However, according to the documentation, the supported types seem to include only kUInt8, kInt8, kInt16, kInt32, kInt64, kFloat32, and kFloat64. But sometimes (rarely) I will get images of type uint16 or uint32.

The tensor values might be wrong if I pass torch::kInt16 in the options to parse a uint16 image, since any pixel value of 32768 or above would be reinterpreted as negative. However, in my case the data is mostly signed int16, so it’s hard for me to test. I just want to know whether there is a way to correctly parse this input into a tensor, and I’m also curious why those types aren’t in PyTorch.
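One workaround I’ve been considering (I haven’t been able to test it against real uint16 data, so treat it as a sketch) is to widen the buffer to int32 on the CPU before calling torch::from_blob, since kInt32 can hold every uint16 value exactly:

#include <vector> // plus the same torch headers as above

// Widen the unsigned 16-bit pixels (pixelData, width, height from above) to
// int32 so that values >= 32768 are not misread as negative.
std::vector<int32_t> widened(pixelData, pixelData + width * height);

// from_blob does not take ownership of the buffer, so clone() copies the data
// into the tensor before the temporary vector is released.
auto options32 = c10::TensorOptions().dtype(torch::kInt32);
at::Tensor t = torch::from_blob(widened.data(), { width, height }, options32).clone();

The obvious downside is the extra copy and the temporary int32 buffer, which doubles the memory needed for that step.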

Any help would be appreciated :slight_smile:

We don’t have plans to support kUInt16 in the short term. What we might want is a conversion utility function. Do you mind opening a feature request on our GitHub repo for this?


I want the same… Is there an optimized way to convert it?
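For example, would something like this work (untested sketch, using only existing libtorch ops, with pixelData / width / height as in the original post), keeping from_blob zero-copy and fixing the values up on the tensor instead?

// Reinterpret the raw uint16 buffer as kInt16 without copying.
auto as_int16 = torch::from_blob(pixelData, { width, height },
                                 c10::TensorOptions().dtype(torch::kInt16));

// Widen to int32 and mask the low 16 bits so that pixels >= 32768 come back
// as their original unsigned values instead of negative int16 values.
at::Tensor t = as_int16.to(torch::kInt32).bitwise_and(0xFFFF);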