PyTorch runtime error "expected all tensors to be on the same device" when all tensors are on the same device

I’m working on an inference script to quickly make predictions and gather useful statistics. I decided it would be a good idea to run the predictions in batches and use CUDA to speed things up considerably (inference on the CPU takes more than 5 seconds, so that’s not an option).

When I run my model on the CPU, there is no problem: I get the output and no errors are thrown. However, when I transfer the model and its inputs to CUDA, I get an error saying the tensors are on different devices, which makes no sense to me.

import torch

# model and dataloader are assumed to be defined earlier in the script
device = torch.device('cuda:0')
sample = next(iter(dataloader))

img, _ = sample
output = model(img.unsqueeze(0))
# The line above works fine (everything is still on the CPU)

model = model.to(device)
img_cuda = img.to(device)
output = model(img_cuda.unsqueeze(0))
# The line above raises the device-mismatch error

I have a feeling something is not right with transferring the model to CUDA.
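As a sanity check (a rough sketch, assuming a standard nn.Module), listing the device of every parameter and buffer after the .to(device) call shows whether anything stayed behind; note that plain tensors created inside the model without register_buffer() won’t appear in this list at all:

# Sanity check: print the device of every registered parameter and buffer.
# Anything still reporting "cpu" after model.to(device) stayed behind.
for name, p in model.named_parameters():
    print(name, p.device)
for name, b in model.named_buffers():
    print(name, b.device)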

Are you explicitly handling the movement of data inside the model with .to(), .cuda(), or .cpu() calls?
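For illustration only (this is not your code, just a hypothetical module), an internal tensor created without regard to the input’s device stays on the CPU even after model.to(device), which produces exactly this error:

import torch
import torch.nn as nn

class BadModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 8)

    def forward(self, x):
        # Created on the default device (CPU) on every call, so it never
        # follows the module when the model is moved to CUDA.
        offset = torch.zeros(8)
        return self.linear(x) + offset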

Thank you. That’s a very clever idea, but there are no such statements inside the model. However, a simpler model causes no such errors and runs amazingly fast on the GPU. I’ll have to carefully examine the more sophisticated model.

I found the fix, although I’m still not sure why the problem occurred. The model creates an instance of a ConvLSTM cell, which initializes a tensor that needed to be manually transferred to CUDA.
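I won’t paste the whole cell here, but the shape of the fix is roughly the sketch below (ConvLSTMCellSketch, init_hidden and the tensor names are my own placeholders, not the actual implementation): create the state tensor on the input’s device instead of the default CPU, or alternatively register it with register_buffer() so that model.to(device) moves it automatically.

import torch
import torch.nn as nn

class ConvLSTMCellSketch(nn.Module):
    # Hypothetical sketch of the problem spot, not the real ConvLSTM code.
    def __init__(self, channels, height, width):
        super().__init__()
        self.state_shape = (channels, height, width)

    def init_hidden(self, batch_size, device):
        # Allocate the hidden and cell state directly on the requested
        # device, so no manual transfer to CUDA is needed later.
        h = torch.zeros(batch_size, *self.state_shape, device=device)
        c = torch.zeros(batch_size, *self.state_shape, device=device)
        return h, c

    def forward(self, x, state=None):
        if state is None:
            # x.device makes the freshly created state follow the input.
            state = self.init_hidden(x.size(0), x.device)
        h, c = state
        # ... convolutional LSTM gating would go here ...
        return h, c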