Expected type torch.cuda.FloatTensor but got torch.FloatTensor

I’m having this issue when computing the losses. Training stops at the line below; both output_o and loss_gen have dtype torch.float32, and the model is on the GPU. The failing line is in the training loop, not inside the model, so I’m not sure what is happening.

Traceback (most recent call last):
  File "train.py", line 73, in <module>
    loss_g = 0.01 * torch.log(1 - output_o) + loss_gen
RuntimeError: expected type torch.cuda.FloatTensor but got torch.FloatTensor

Edit: I think I’ve solved it. output_o was on the CPU.

Yes, exactly. When you move your model to the GPU, its outputs will be on the GPU as well, so every tensor you combine them with has to live on the same device.
You can move a tensor to the GPU with the .to(device) method.
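
A minimal sketch of the idea, using placeholder tensors in place of the real model outputs (the shapes and random values here are just illustrative assumptions):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-ins for the tensors in the training loop:
output_o = torch.rand(8, 1)           # e.g. a discriminator score that ended up on the CPU
loss_gen = torch.rand(1).to(device)   # a generator loss that is already on the GPU

# Move the CPU tensor to the same device before combining it with GPU tensors
output_o = output_o.to(device)
loss_g = 0.01 * torch.log(1 - output_o) + loss_gen
```

Without the .to(device) call, mixing a CPU tensor with a CUDA tensor in the same expression raises exactly the "expected type torch.cuda.FloatTensor but got torch.FloatTensor" error from the traceback above.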