Is it a bug that torch.nan throws an error with dtype=torch.int16?
import torch

a = torch.tensor([1, 2, torch.nan])                     # works fine
b = torch.tensor([1, 2, torch.nan], dtype=torch.int16)  # RuntimeError: value cannot be converted to type int16 without overflow
I don’t see why torch.nan support wouldn’t be extended to int16.