Hi,
I'm using my model with nn.DataParallel and I get this error:
RuntimeError: arguments are located on different GPUs
It comes from this line:
spec_m = torch.matmul(spec_f, self.fb)
where self.fb is an nn.Parameter of the model and it always stays on cuda:0, while spec_f can be on cuda:1 (which is expected, since I wrapped the model in nn.DataParallel).
I'm using a PyTorch 1.0 nightly build and I'm curious why this happens.
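For context, here is a minimal sketch of what I'm doing (the module name, shapes, and dimensions are placeholders, not my actual code):

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, n_freq=257, n_mels=80):
        super().__init__()
        # fb is registered as an nn.Parameter, so I expected
        # nn.DataParallel to replicate it to every device
        # together with the rest of the model.
        self.fb = nn.Parameter(torch.randn(n_freq, n_mels))

    def forward(self, spec_f):
        # This is the failing line: under DataParallel, spec_f
        # arrives on cuda:1 for the second replica, but self.fb
        # stays on cuda:0.
        return torch.matmul(spec_f, self.fb)

model = nn.DataParallel(Model().cuda())
out = model(torch.randn(8, 100, 257).cuda())  # RuntimeError on >1 GPU
```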