Broadcast function not implemented for CPU tensors


#1

I am getting the error "Broadcast function not implemented for CPU tensors". The error is thrown during the forward pass of my model. Printing the input tensor to the model gives: Variable containing: ... [torch.cuda.FloatTensor of size 1024x1024 (GPU 0)]
My setup has 2 GPUs, and my model is wrapped in DataParallel.
When I run my model with CUDA_VISIBLE_DEVICES=1, I do not have this problem, but I would really like to utilize both GPUs. Any ideas why this error occurs and how to prevent it?
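(A minimal sketch of a setup that can trigger this error, based on the diagnosis in #2 below: some parameters are left on the CPU when the model is wrapped in DataParallel. The Net module and its fc1/fc2 layers here are hypothetical, not from the original post.)

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(1024, 1024)
        self.fc2 = nn.Linear(1024, 1024)

    def forward(self, x):
        return self.fc2(self.fc1(x))

model = Net()
model.fc1.cuda()                # only part of the model is moved to the GPU; fc2 stays on CPU
model = nn.DataParallel(model)  # DataParallel broadcasts parameters to all visible GPUs

x = torch.randn(1024, 1024).cuda()
out = model(x)  # fails: the CPU parameters of fc2 cannot be broadcast across GPUs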


(colesbury) #2

You have some parameters that are not on the GPU. Try calling model.cuda().
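(A minimal sketch of this fix, reusing the hypothetical Net module from the sketch above: move the whole model to the GPU before wrapping it in DataParallel, so every parameter is a CUDA tensor when broadcasting happens.)

import torch
import torch.nn as nn

model = Net()
model.cuda()                    # moves *all* parameters and buffers to GPU 0
model = nn.DataParallel(model)  # now replication across both GPUs can succeed

x = torch.randn(1024, 1024).cuda()
out = model(x)  # forward pass runs on both GPUs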


(Max) #3

I got the same problem, did you solve it?


#4

You can try model.cuda(), as colesbury suggests above.