transforms.Normalize Runtime error

Hey Guys,

I want to normalize my Tensor.

When calling Normalize((0.5,0.5,0.5), (0.5,0.5,0.5)), I’m getting the following error:

tensor.sub_(mean[:, None, None]).div_(std[:, None, None])
RuntimeError: expected device cpu but got device cuda:0

How do I send Normalize to a CUDA device? Is this even possible?

Thanks for helping.

Which torchvision version are you using?
mean and std will be created as tensors on the same device as the input, as seen in these lines of code, and this code works fine:

x = torch.randn(3, 224, 224).cuda()
norm = transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5))
out = norm(x)
print(out.device)
> cuda:0
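
For anyone stuck on an older torchvision version, here is a minimal sketch of doing the same normalization manually, creating mean and std on the input's device so the broadcasting happens on the GPU (the values are just the ones from the question, not a recommendation):

import torch

x = torch.randn(3, 224, 224).cuda()
# create mean/std on the same device as the input tensor
mean = torch.tensor([0.5, 0.5, 0.5], device=x.device)
std = torch.tensor([0.5, 0.5, 0.5], device=x.device)
# broadcast over the channel dimension, matching what Normalize does internally
out = (x - mean[:, None, None]) / std[:, None, None]
print(out.device)
> cuda:0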

Yeah, I found the problem yesterday: I had torchvision version 0.2. After updating it, everything works now. I forgot to post the solution, sorry.
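
In case it helps others hitting the same error, the installed version can be checked with, e.g.:

python -c "import torchvision; print(torchvision.__version__)"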

Thank you :slight_smile: