datasets.ImageFolder - Expected all tensors to be on the same device (PyTorch 1.9)

When iterating over the data in the DataLoader, I get this RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

My code still runs on an older computer (Python 3.7, CUDA 10, PyTorch 1.4.0), but fails on my new laptop (Python 3.8, CUDA 11.1, PyTorch 1.9.1).
While debugging, the problem seems to come from the transforms.Normalize(means, stds) call. The mean and std are tensors on CUDA (I have checked this), so it must be the data itself that is still on the CPU.
I’m loading my data with datasets.ImageFolder(path, transform=transforms).
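
For reference, the relevant part of the pipeline looks roughly like this (the path, batch size, and normalization values below are placeholders, not my exact settings):

import torch
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

device = torch.device('cuda')

# placeholder stats; in my code the mean/std tensors are already on CUDA
means = torch.tensor([0.485, 0.456, 0.406], device=device)
stds = torch.tensor([0.229, 0.224, 0.225], device=device)

data_transforms = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(means, stds),
])

dataset = datasets.ImageFolder('path/to/data', transform=data_transforms)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for images, labels in loader:  # this is where the RuntimeError shows up
    ...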

Does anyone know what I should do to fix this error in the newer version?

It’s quite unexpected to see an error in transforms.Normalize, as internally the mean and std are pushed to the same device as the incoming tensor, as seen here. So even if you had a device mismatch (mean and std on CUDA while the incoming tensor is on the CPU), it should still work:

import torch
from torchvision import transforms

# mean/std live on the GPU, the input tensor stays on the CPU
norm = transforms.Normalize(
    mean=torch.randn(3).cuda(), std=torch.randn(3).cuda())

x = torch.randn(1, 3, 24, 24)  # CPU tensor
out = norm(x)                  # still works; mean/std are moved internally

Are you sure this operation raises the error? Could you post the entire stacktrace, please?

I just found out that this line of code (from the previous version…) was raising the error:
torch.set_default_tensor_type('torch.cuda.FloatTensor')

The problem was indeed not the Normalize function itself, thanks!
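
In case it helps anyone else hitting this: I dropped the global default-tensor-type call and now move each batch to the GPU explicitly inside the loop, which is the usual pattern anyway. A rough sketch with dummy data and a dummy model (my real loader comes from datasets.ImageFolder):

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# dummy loader/model just to show the pattern; no torch.set_default_tensor_type anywhere
loader = DataLoader(TensorDataset(torch.randn(8, 3, 24, 24),
                                  torch.randint(0, 2, (8,))),
                    batch_size=4)
model = nn.Conv2d(3, 4, kernel_size=3).to(device)

for images, labels in loader:
    images = images.to(device)  # batches stay on the CPU in the DataLoader
    labels = labels.to(device)  # and are moved to the GPU per iteration
    outputs = model(images)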