pack_padded_sequence on GPU

When I use torch.nn.utils.rnn.pack_padded_sequence() in my model and move the model to a GPU, I need to move the input data to the GPU as well. But the arguments of torch.nn.utils.rnn.pack_padded_sequence() include a list of lengths; how can I move a list to the GPU? The .cuda() method only works on torch.Tensor.

torch.nn.utils.rnn.pack_padded_sequence() returns a PackedSequence instance which has a .cuda() method. Does that not work?

I want to move the input of torch.nn.utils.rnn.pack_padded_sequence() to the GPU, which includes a list.

Can’t you just do this?

on_gpu = torch.nn.utils.rnn.pack_padded_sequence(input, lengths).cuda()

or does that come with disadvantages that I am not seeing?
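For reference, a minimal sketch of that approach (the batch shape, feature size, and lengths here are made up for illustration):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Hypothetical padded batch: 3 sequences padded to length 5, feature size 4
padded = torch.randn(3, 5, 4)
lengths = [5, 3, 2]  # true sequence lengths, sorted longest-first

# Pack first, then move the resulting PackedSequence to the GPU
packed = pack_padded_sequence(padded, lengths, batch_first=True)
if torch.cuda.is_available():
    packed = packed.cuda()  # PackedSequence itself supports .cuda()
```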

According to the source code, pack_padded_sequence converts the lengths list to a Variable containing a LongTensor:

if isinstance(lengths, list):
    lengths = Variable(torch.LongTensor(lengths))

so you could just do that to your lengths list before using pack_padded_sequence.
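A minimal sketch of that pre-conversion (in recent PyTorch, Variable has been merged into Tensor, so a plain LongTensor behaves the same; the shapes here are made up):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

lengths = [5, 3, 2]                  # plain Python list
lengths = torch.LongTensor(lengths)  # convert up front, before packing
# Note: the lengths tensor should stay on the CPU even if the input is on the GPU.

padded = torch.randn(3, 5, 4)        # hypothetical padded batch
packed = pack_padded_sequence(padded, lengths, batch_first=True)
```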

Ok, it works. Thanks a lot.

Sorry, according to the source code on the official website, I can’t find the code you provided; it seems like pack_padded_sequence() uses lengths as a plain list directly.

My mistake.

pack_padded_sequence takes three arguments: (input, lengths, batch_first=False).

If input is on the GPU, then the internal list steps will contain Variables stored on the GPU, which is what matters for performance. The batch_sizes list won’t be stored on the GPU, but I don’t think that matters for performance.
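Putting it together, a minimal sketch (assuming a recent PyTorch, where moving the padded input with `.to(device)` and leaving `lengths` as a CPU-side list is sufficient):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

device = "cuda" if torch.cuda.is_available() else "cpu"

padded = torch.randn(3, 5, 4)  # hypothetical padded batch
lengths = [5, 3, 2]            # stays a plain Python list on the CPU side

# Only the input tensor needs to be moved to the GPU
packed = pack_padded_sequence(padded.to(device), lengths, batch_first=True)
# packed.data lives on the same device as the input;
# packed.batch_sizes always stays on the CPU.
```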