nn.AdaptiveLogSoftmaxWithLoss raises a TypeError under nn.DataParallel

I created a model:

model = nn.DataParallel(model, device_ids=[0, 1, 2, 3])
model = model.to(device)

Inside the model I used nn.AdaptiveLogSoftmaxWithLoss:

self.out = nn.AdaptiveLogSoftmaxWithLoss(300, 27624, cutoffs=[round(27624 / 15), 3 * round(27624 / 15)], div_value=4.0)
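
For reference, on a single GPU the call works and returns a NamedTuple of (output, loss). A minimal standalone example (the batch size of 8 and the random inputs are just for illustration, not my actual data):

import torch
import torch.nn as nn

asm = nn.AdaptiveLogSoftmaxWithLoss(
    300, 27624,
    cutoffs=[round(27624 / 15), 3 * round(27624 / 15)],
    div_value=4.0,
)
hidden = torch.randn(8, 300)             # hidden states, batch of 8
target = torch.randint(0, 27624, (8,))   # target class indices
result = asm(hidden, target)
print(result.output.shape)  # torch.Size([8]); per-sample target log-probs
print(result.loss)          # scalar mean loss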

With nn.DataParallel, however, I get this traceback:

Traceback (most recent call last):
  File "RUN2.py", line 237, in <module>
    qnet.model_fit()
  File "RUN2.py", line 73, in model_fit
    loss = self.model(input_F)
  File "/home/akmmrahman/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/akmmrahman/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward
    return self.gather(outputs, self.output_device)
  File "/home/akmmrahman/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 165, in gather
    return gather(outputs, output_device, dim=self.dim)
  File "/home/akmmrahman/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 67, in gather
    return gather_map(outputs)
  File "/home/akmmrahman/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in gather_map
    return type(out)(map(gather_map, zip(*outputs)))
TypeError: __new__() missing 1 required positional argument: 'loss'

Doesn't nn.AdaptiveLogSoftmaxWithLoss support multiple GPUs? It works fine on a single GPU. It would be very helpful if someone could explain what is going wrong. Thanks.

The best place to report a potential bug is the PyTorch GitHub :)
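
For what it's worth, the traceback points at the cause: AdaptiveLogSoftmaxWithLoss.forward returns a NamedTuple (output, loss), and in this PyTorch version DataParallel's gather rebuilds non-tensor outputs with type(out)(map(gather_map, zip(*outputs))). A NamedTuple constructor expects its fields as separate positional arguments, so it receives only the single map object and complains that 'loss' is missing. Until that is handled upstream, a common workaround is to have your module's forward return plain tensors, which gather can concatenate across replicas. A minimal sketch, assuming a forward that takes inputs and targets (MyModel, the encoder layer, and all sizes here are placeholders, not the poster's actual code):

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(100, 300)  # stand-in for the real network
        self.out = nn.AdaptiveLogSoftmaxWithLoss(
            300, 27624,
            cutoffs=[round(27624 / 15), 3 * round(27624 / 15)],
            div_value=4.0,
        )

    def forward(self, x, target):
        hidden = self.encoder(x)
        result = self.out(hidden, target)
        # Return a plain tensor instead of the NamedTuple so that
        # DataParallel's gather only has to concatenate tensors.
        # unsqueeze(0) gives the scalar loss a batch dimension of 1.
        return result.loss.unsqueeze(0)

model = nn.DataParallel(MyModel(), device_ids=[0, 1, 2, 3]).to("cuda:0")
# Each replica returns its own loss; average them for the final value:
# loss = model(x, target).mean()

With this change, gather returns a tensor of shape (num_gpus,), one loss per replica, and .mean() gives the value to backpropagate.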