ByteTensor to FloatTensor is slow?

When I try to convert the resulting ByteTensor to a FloatTensor by typecasting, it took around 2.5e-4, while converting other types to FloatTensor all took on the order of 1e-5. Why is converting from Byte to Float relatively slow? Are there any other ways to do this?
Thanks a lot!
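For reference, the conversion being timed presumably looks something like the sketch below (the shape of the tensor and the contents of `x_u` are assumptions for illustration, since the original snippet was not included):

```python
import time
import torch

# assumed stand-in: any integer tensor whose comparison yields a byte/bool mask
x_u = torch.randint(0, 3, (1000, 1000))
mask = (x_u == 0)            # ByteTensor on 0.4, BoolTensor on newer releases

t0 = time.perf_counter()
f = mask.float()             # the plain typecast to FloatTensor being timed
elapsed = time.perf_counter() - t0
print(f.dtype, elapsed)
```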

Right now there’s no faster way than just typecasting (as you did).

With the latest 0.4 release, you can use the .to function along with device objects to make this much cleaner.

```python
# on top of your script somewhere
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# in your code, use .to(device)
(x_u == 0).to(device, dtype=torch.float32)
```
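As a self-contained usage sketch (the contents of `x_u` here are made up for illustration; `.to` accepts a device and a `dtype` in one call, casting and moving the tensor together):

```python
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# made-up stand-in for the poster's x_u
x_u = torch.tensor([0, 1, 0, 2])

# comparison mask cast to float32 and moved to the chosen device in one call
y = (x_u == 0).to(device, dtype=torch.float32)
print(y)
```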