I'm having trouble understanding these two utilities and can't figure out what they do.
For example, I was trying to replicate the example from https://discuss.pytorch.org/t/simple-working-example-how-to-use-packing-for-variable-length-sequence-inputs-for-rnn/2120
I followed the PyTorch documentation and coded it with batch_first=True:
import torch
import torch.nn as nn
from torch.autograd import Variable
batch_size = 3
max_length = 3
hidden_size = 2
num_input_features = 1
input_tensor = torch.zeros(batch_size, max_length, num_input_features)
input_tensor[0] = torch.FloatTensor([1, 2, 3]).unsqueeze(1)
input_tensor[1] = torch.FloatTensor([4, 5, 0]).unsqueeze(1)
input_tensor[2] = torch.FloatTensor([6, 0, 0]).unsqueeze(1)
batch_in = Variable(input_tensor)
seq_lengths = [3, 2, 1]  # lengths sorted in decreasing order
pack = torch.nn.utils.rnn.pack_padded_sequence(batch_in, seq_lengths, batch_first=True)
Here I get the output
PackedSequence(data=[torch.FloatTensor of size 6x1], batch_sizes=[3, 2, 1])
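For reference, here is a minimal runnable sketch of the same setup (using plain tensors instead of the legacy Variable), printing the packed data so you can see where the 6x1 shape comes from:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

batch_size, max_length, num_input_features = 3, 3, 1

# Padded batch: sequences [1,2,3], [4,5], [6], zero-padded to length 3
batch_in = torch.zeros(batch_size, max_length, num_input_features)
batch_in[0, :, 0] = torch.tensor([1., 2., 3.])
batch_in[1, :2, 0] = torch.tensor([4., 5.])
batch_in[2, :1, 0] = torch.tensor([6.])
seq_lengths = [3, 2, 1]  # must be sorted in decreasing order here

pack = pack_padded_sequence(batch_in, seq_lengths, batch_first=True)
print(pack.data.shape)      # torch.Size([6, 1]) -- 3+2+1 = 6 real timesteps
print(pack.data.view(-1))   # tensor([1., 4., 6., 2., 5., 3.]) -- time-major order
print(pack.batch_sizes)     # tensor([3, 2, 1])

# Round trip back to the padded tensor
unpacked, lengths = pad_packed_sequence(pack, batch_first=True)
```

So the packed data concatenates, timestep by timestep, only the real (non-padding) entries: all 3 sequences contribute at t=0, two at t=1, one at t=2, giving 6 rows of 1 feature each.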
I could retrieve the original padded sequence by calling pad_packed_sequence on it, which is obvious.
But can somebody help me understand how and why we got that output 'pack' with size (6, 1)? Also, why do we need these two utilities in general, and how are they useful?
Thanks in advance for the help.