Hi,
I'm having trouble understanding these two utilities and can't figure out what they do.
For example, I was trying to replicate this with the example from "Simple working example how to use packing for variable-length sequence inputs for rnn".
I followed the PyTorch documentation and coded it with `batch_first=True`:
```python
import torch
import torch.nn as nn
from torch.autograd import Variable

batch_size = 3
max_length = 3
hidden_size = 2
n_layers = 1
num_input_features = 1

input_tensor = torch.zeros(batch_size, max_length, num_input_features)
input_tensor[0] = torch.FloatTensor([1, 2, 3]).unsqueeze(1)
input_tensor[1] = torch.FloatTensor([4, 5, 0]).unsqueeze(1)
input_tensor[2] = torch.FloatTensor([6, 0, 0]).unsqueeze(1)

batch_in = Variable(input_tensor)
seq_lengths = [3, 2, 1]
pack = torch.nn.utils.rnn.pack_padded_sequence(batch_in, seq_lengths, batch_first=True)
print(pack)
```
Here I get this output:

```
PackedSequence(data=Variable containing:
 1
 4
 6
 2
 5
 3
[torch.FloatTensor of size 6x1]
, batch_sizes=[3, 2, 1])
```
I can retrieve the original (padded) sequence back if I do

```python
torch.nn.utils.rnn.pad_packed_sequence(pack, batch_first=True)
```

which is obvious.
But can somebody help me understand how and why we got that output `pack` with size (6, 1)? Also the functionality in general: why do we need these two utilities, and how are they useful?
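My current guess (please correct me if this is wrong) is that packing walks through the time steps and, at each step, collects the entry from every sequence that is still "alive", with the batch sorted by decreasing length. A plain-Python sketch of that idea, using the same three sequences as above:

```python
# Sequences sorted by decreasing length, as pack_padded_sequence requires.
seqs = [[1, 2, 3], [4, 5], [6]]
max_len = max(len(s) for s in seqs)

data, batch_sizes = [], []
for t in range(max_len):
    # Take the t-th element of every sequence that is at least t+1 long.
    alive = [s[t] for s in seqs if len(s) > t]
    data.extend(alive)
    batch_sizes.append(len(alive))

print(data)         # [1, 4, 6, 2, 5, 3]
print(batch_sizes)  # [3, 2, 1]
```

That would explain both the 6x1 `data` (6 real elements, the zero padding dropped) and `batch_sizes=[3, 2, 1]`, but I'd like confirmation that this is actually what happens internally.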
Thanks in advance for the help.
Cheers,
Vijendra