I am getting the error `Broadcast function not implemented for CPU tensors`. The error is thrown during the forward pass of my model. Printing the input tensor to the model gives: `Variable containing: ... [torch.cuda.FloatTensor of size 1024x1024 (GPU 0)]`

My setup has 2 GPUs, and my model is wrapped in `DataParallel`.

When I run my model with `CUDA_VISIBLE_DEVICES=1`, I do not have this problem, but I would really like to utilize both GPUs. Any ideas **why** this error is raised and **how** to prevent it?

You have some parameters which are not on the GPU. Try calling `model.cuda()`.
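For context, a minimal sketch of what this looks like (the `nn.Linear` model and tensor sizes here are hypothetical, chosen only to mirror the 1024x1024 input above). The key point is that every parameter must be on the GPU before the forward pass, because `DataParallel` broadcasts parameters from GPU 0 to the other devices and that broadcast is only implemented for CUDA tensors:

```python
import torch
import torch.nn as nn

# Hypothetical model for illustration
model = nn.Linear(1024, 1024)

if torch.cuda.is_available():
    # Wrap in DataParallel and move all parameters/buffers to the GPU.
    # If any parameter stays on the CPU, the replication step raises
    # "Broadcast function not implemented for CPU tensors".
    model = nn.DataParallel(model).cuda()
    x = torch.randn(8, 1024).cuda()
else:
    # CPU fallback so the sketch runs anywhere
    x = torch.randn(8, 1024)

out = model(x)
print(out.shape)
```

Note that the input tensor being on the GPU (as in your printout) is not enough; the check applies to the model's parameters and buffers as well.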


I got the same problem, did you solve it?

You can try `model.cuda()`, as colesbury says.

I encountered this error when running inference with the model in CPU mode. How can I solve it?

Hi, I encountered the same problem when I used 4 GPUs to train the model and then tried to move the model to the CPU. Do you have any solutions? Thanks!
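One approach for the CPU-inference case, sketched with a hypothetical `nn.Linear` model: `DataParallel` is a CUDA-oriented wrapper, so instead of calling forward through the wrapper on a CPU machine, unwrap the underlying network via `.module` and move that to the CPU:

```python
import torch
import torch.nn as nn

# Hypothetical model as it would look after multi-GPU training
model = nn.DataParallel(nn.Linear(1024, 1024))

# For CPU inference, bypass the DataParallel wrapper: grab the
# wrapped module and move its parameters to the CPU.
cpu_model = model.module.cpu()

x = torch.randn(4, 1024)
with torch.no_grad():
    out = cpu_model(x)
print(out.shape)
```

Relatedly, if you save `model.state_dict()` of the wrapped model, the keys are prefixed with `module.`, so loading it into a plain (non-`DataParallel`) model requires stripping that prefix or saving `model.module.state_dict()` instead.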