transforms.Normalize(): expected type Double but got type Float

I see this question has been asked before, but the solutions posted don’t seem to work for me, and my traceback message is different. For some reason my model is failing during an application of transforms.Normalize():

transforms_ = [transforms.Normalize((0.5,), (0.5,))]
self.transform = transforms.Compose(transforms_)

raw_arr_A = np.load(self.files_A[index % len(self.files_A)])
raw_arr_A_res = raw_arr_A[:,:,0]
tensor_A = torch.unsqueeze(torch.from_numpy(raw_arr_A_res), 0).double()
item_A = self.transform(tensor_A)

Printing the .type() of tensor_A shows it’s definitely torch.DoubleTensor.

Bottom of the traceback (it starts at a for loop over enumerate() on my dataloader):

File "/Users/user/Documents/env/lib/python3.7/site-packages/torchvision/transforms/functional.py", line 208, in normalize
    tensor.sub_(mean[:, None, None]).div_(std[:, None, None])
RuntimeError: expected backend CPU and dtype Double but got backend CPU and dtype Float

I’ve tried casting my networks to .double() (even though I think that is unrelated). Does transforms.Normalize() not accept Double type tensors or something?
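For reference, here’s a minimal standalone sketch of the same sequence of calls, with the dataset class and np.load() stripped out and a random array standing in for my data (just for illustration):

import numpy as np
import torch
from torchvision import transforms

transform = transforms.Compose([transforms.Normalize((0.5,), (0.5,))])

# random single-channel array standing in for np.load(...)[:, :, 0]
raw_arr = np.random.rand(64, 64)            # float64 by default
tensor_A = torch.unsqueeze(torch.from_numpy(raw_arr), 0).double()
print(tensor_A.type())                      # torch.DoubleTensor
item_A = transform(tensor_A)                # this is where the RuntimeError comes from on my install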

This was a known bug and was recently fixed here.
Since the PR was merged only 4 days ago, you might need to build torchvision from source to pick up the fix.
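In the meantime, a couple of possible workarounds (just sketches against your snippet above, assuming you can either work in float32 or normalize outside of transforms):

# Option 1: keep the tensor in float32, which should match the dtype
# the current normalize() uses for its internal mean/std
tensor_A = torch.unsqueeze(torch.from_numpy(raw_arr_A_res), 0).float()
item_A = self.transform(tensor_A)

# Option 2: stay in double and skip transforms.Normalize,
# applying the same (x - mean) / std by hand
tensor_A = torch.unsqueeze(torch.from_numpy(raw_arr_A_res), 0).double()
item_A = (tensor_A - 0.5) / 0.5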
