AssertionError in DataParallel when applied to an RNN

I’m trying to apply DataParallel to an RNN model.

This is the relevant part of my code:

    import torch
    import torch.nn as nn

    if use_cuda:
        encoder = encoder.cuda()
        decoder = decoder.cuda()

        # Wrap both models so batches are split across GPUs along dim 0
        encoder = nn.DataParallel(encoder, dim=0)
        decoder = nn.DataParallel(decoder, dim=0)

    class EncoderRNN(nn.Module):
        def __init__(self, vocab_size, hidden_size):
            super(EncoderRNN, self).__init__()
            self.hidden_size = hidden_size
            self.embedding = nn.Embedding(vocab_size, hidden_size)
            self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)

        def forward(self, input_batch, input_batch_length, hidden):
            print(input_batch)
            print(input_batch_length)
            print(hidden)
            embedded = self.embedding(input_batch)
            # Pack the padded batch so the GRU skips the padding steps
            packed_input = nn.utils.rnn.pack_padded_sequence(
                embedded, input_batch_length.cpu().numpy(), batch_first=True)
            output, hidden = self.gru(packed_input, hidden)
            return output, hidden

        def init_hidden(self, batch_size):
            result = torch.autograd.Variable(torch.zeros(1, batch_size, self.hidden_size))

            if use_cuda:
                return result.cuda()
            else:
                return result

I can guarantee that all inputs are CUDA tensors, but I still received this error:

    Traceback (most recent call last):
      File "train.py", line 156, in <module>
        train_iteration(encoder, decoder, fileDataSet)
      File "train.py", line 122, in train_iteration
        target_indices, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion)
      File "train.py", line 49, in train
        encoder_output, encoder_hidden = encoder(input_batch, input_batch_length, encoder_hidden)
      File "/home/cjunjie/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/cjunjie/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 74, in forward
        return self.gather(outputs, self.output_device)
      File "/home/cjunjie/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 86, in gather
        return gather(outputs, output_device, dim=self.dim)
      File "/home/cjunjie/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 65, in gather
        return gather_map(outputs)
      File "/home/cjunjie/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 60, in gather_map
        return type(out)(map(gather_map, zip(*outputs)))
      File "/home/cjunjie/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 60, in gather_map
        return type(out)(map(gather_map, zip(*outputs)))
      File "/home/cjunjie/anaconda3/lib/python3.6/site-packages/torch/nn/utils/rnn.py", line 39, in __new__
        return super(PackedSequence, cls).__new__(cls, *args[0])
      File "/home/cjunjie/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 57, in gather_map
        return Gather.apply(target_device, dim, *outputs)
      File "/home/cjunjie/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 58, in forward
        assert all(map(lambda i: i.is_cuda, inputs))
    AssertionError

I didn’t find any similar problem on Google.

Update:

Content of input_batch:

     2.0000e+00  6.2900e+02  5.4000e+01  ...   0.0000e+00  0.0000e+00  0.0000e+00
     2.0000e+00  1.6759e+04  6.0000e+00  ...   0.0000e+00  0.0000e+00  0.0000e+00
     2.0000e+00  7.2000e+01  3.3500e+02  ...   0.0000e+00  0.0000e+00  0.0000e+00
     2.0000e+00  5.4000e+01  1.2900e+02  ...   0.0000e+00  0.0000e+00  0.0000e+00
    [torch.cuda.LongTensor of size (4,2687) (GPU 0)]

input_batch_length:

     1844
     1507
     1219
     1021
    [torch.cuda.LongTensor of size (4,) (GPU 0)]

hidden:

    ( 0 ,.,.) =
       0   0   0  ...    0   0   0
       0   0   0  ...    0   0   0
       0   0   0  ...    0   0   0
       0   0   0  ...    0   0   0
    [torch.cuda.FloatTensor of size (1,4,256) (GPU 0)]

I can’t print “inputs” because of this error:

    UnicodeEncodeError: 'latin-1' codec can't encode character '\u22f1' in position 274: ordinal not in range(256)

One thing you could do: in that function, right before the failing assertion, add a line to print the inputs (or drop into the Python debugger).
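A minimal sketch of what that could look like (assuming you edit your local copy of torch/nn/parallel/_functions.py, just above the assert shown in the traceback; printing only the type and the is_cuda flag also avoids dumping full tensor contents):

    # In torch/nn/parallel/_functions.py, just above the failing assert:
    for i in inputs:
        # Print metadata only; the repr of a full tensor can trigger encoding errors
        print(type(i), getattr(i, 'is_cuda', None))
    assert all(map(lambda i: i.is_cuda, inputs))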

I’ve updated the post above with the contents of the inputs.

Have you solved the problem? I got exactly the same error.

In lib/python3.6/site-packages/torch/nn/parallel/_functions.py, you are getting the assertion error because some of the values inside the inputs tuple (i.e. those for which i.is_cuda is False) have not been loaded onto the GPUs.

Solution: inputs = tuple(map(lambda i: i.cuda(), inputs)) will load all the values inside inputs onto the default GPU.
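In context, the patch would look something like this (a sketch against the Gather.forward shown in the traceback; exact line numbers vary between PyTorch versions):

    # In torch/nn/parallel/_functions.py, inside Gather.forward:
    inputs = tuple(map(lambda i: i.cuda(), inputs))  # move any CPU values to the default GPU
    assert all(map(lambda i: i.is_cuda, inputs))     # the assertion now passes

Note that .cuda() with no argument copies to the current default device, so everything ends up on GPU 0 unless you select another device first.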