nn.Parameter arguments are located on different GPUs

Hi,

I'm using my model with DataParallel and I get this error:
RuntimeError: arguments are located on different GPUs

It comes from this line:

spec_m = torch.matmul(spec_f, self.fb)

where self.fb is an nn.Parameter of the model. It always stays on cuda:0, while spec_f can be on cuda:1 (which is expected, since I wrapped the model in nn.DataParallel).
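Roughly, the module looks like this (the class name and the filterbank shapes are placeholders; only self.fb and the matmul line are from my actual code):

```python
import torch
import torch.nn as nn

class SpecModel(nn.Module):
    def __init__(self, n_fft_bins=513, n_mels=64):
        super().__init__()
        # registered as an nn.Parameter, so I'd expect DataParallel
        # to replicate it onto each GPU along with the rest of the model
        self.fb = nn.Parameter(torch.randn(n_fft_bins, n_mels))

    def forward(self, spec_f):
        # spec_f: (batch, time, n_fft_bins), self.fb: (n_fft_bins, n_mels)
        spec_m = torch.matmul(spec_f, self.fb)
        return spec_m
```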

I'm using a PyTorch 1.0 nightly build and I'm curious why this happens.

Is it possible that you call .cuda() on some variable or layer inside the nn.Module?

Thanks for the reply. No, I don't use .cuda() at all; I only call .to(device) on the model and the inputs.
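This is roughly the device handling I described (SpecModel is the placeholder class from the sketch above, and loader stands in for my data iterator):

```python
import torch
import torch.nn as nn

device = torch.device("cuda:0")

model = SpecModel().to(device)   # model moved with .to(device), no .cuda() anywhere
model = nn.DataParallel(model)   # wrapped afterwards

for spec_f in loader:
    spec_f = spec_f.to(device)   # inputs also moved with .to(device)
    spec_m = model(spec_f)
```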