Question about: Packed RNN with DataParallel

@apaszke @stephenrawls

I have a naive question here: in order to make the above forward function work on the GPU, the whole model and the input have to be moved to the GPU by calling .cuda(), right? But since pack_padded_sequence also needs a sequence-lengths parameter (which is typically a plain Python list living on the CPU), do I have to convert that list into a CUDA tensor as well?

I’m having issues when I do this conversion, i.e. calling LongTensor(input_lengths).cuda() and passing the result to pack_padded_sequence.
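
For concreteness, here is a minimal sketch of what I mean (the LSTM, shapes, and lengths are placeholders I made up, not my real model):

```python
import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Placeholder encoder and shapes, just to illustrate the call pattern.
rnn = nn.LSTM(input_size=8, hidden_size=16, batch_first=True).cuda()

x = torch.randn(4, 10, 8).cuda()  # padded batch, moved to the GPU
input_lengths = [10, 9, 7, 3]     # per-sequence lengths, sorted descending

# The conversion that gives me trouble: moving the lengths to the GPU
# lengths_gpu = torch.LongTensor(input_lengths).cuda()
# packed = pack_padded_sequence(x, lengths_gpu, batch_first=True)  # errors here

# Passing the plain CPU list instead seems to work:
packed = pack_padded_sequence(x, input_lengths, batch_first=True)
out, _ = rnn(packed)
out, out_lengths = pad_packed_sequence(out, batch_first=True)
print(out.shape)  # torch.Size([4, 10, 16])
```

So is keeping the lengths on the CPU the intended usage, even when everything else is on the GPU?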