Torch.nan not supported in int16

Is it a bug that torch.nan throws an error with dtype=torch.int16?

a = torch.tensor([1, 2, torch.nan])  # works fine (default float dtype)
b = torch.tensor([1, 2, torch.nan], dtype=torch.int16)  # RuntimeError: value cannot be converted to type int16 without overflow

I don’t see why the torch.nan functionality couldn’t be extended to int16.


NaN is a floating-point concept. None of the integer dtypes define NaN, because every bit pattern of an integer type represents a valid integer in its range, leaving no spare pattern to mean "not a number".
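To illustrate the point above, here is a small stdlib-only sketch (no torch needed): IEEE 754 floats reserve bit patterns for NaN (exponent bits all 1, nonzero mantissa), while int16 has no reserved patterns at all, since all 2**16 patterns decode to ordinary integers.

```python
import math
import struct

# Inspect the float32 bit pattern of NaN.
bits = struct.unpack("<I", struct.pack("<f", math.nan))[0]
exponent = (bits >> 23) & 0xFF       # 8 exponent bits
mantissa = bits & 0x7FFFFF           # 23 mantissa bits

# NaN is encoded as exponent all ones with a nonzero mantissa.
print(exponent == 0xFF and mantissa != 0)  # True

# int16 has no such reserved encoding: all 65536 bit patterns
# map to valid integers in [-32768, 32767], so there is no
# spare pattern left over to represent NaN.
print(2**16)  # 65536
```

If you need "missing" markers alongside integers, the usual workarounds are to keep the tensor in a float dtype (where NaN is valid) or to use an out-of-band sentinel value plus a separate boolean mask.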