I’m having this issue when computing losses. Training stops at the line shown below. loss_gen’s dtype is torch.float32, the model is on the GPU, and the failing line is in the training loop, not inside the model. I’m not sure what is happening.
Traceback (most recent call last):
  File "train.py", line 73, in <module>
    loss_g = 0.01 * torch.log(1 - output_o) + loss_gen
RuntimeError: expected type torch.cuda.FloatTensor but got torch.FloatTensor
Edit: I think I’ve solved it. output_o was on the CPU.
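For anyone hitting the same error: every tensor in the loss expression must live on the same device as the model. A minimal sketch of the fix (the names output_o and loss_gen mirror the traceback above; the values here are made up for illustration):

```python
import torch

# Pick the model's device once and reuse it everywhere.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

loss_gen = torch.tensor(0.5, device=device)   # already on the model's device
output_o = torch.sigmoid(torch.randn(1))      # accidentally created on the CPU

# Move the stray tensor to the same device before combining:
output_o = output_o.to(device)
loss_g = 0.01 * torch.log(1 - output_o) + loss_gen

print(loss_g.device == loss_gen.device)  # True: no device-mismatch RuntimeError
```

In general it’s safest to call `.to(device)` on inputs right where they come out of the data loader, so every tensor downstream is already on the correct device.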