Customizing a layer or network to work with pack_padded_sequence

Hi,
I have been using pack_padded_sequence to pack padded, length-sorted variable-length inputs for RNNs and LSTMs.

I would like to write a custom layer or network that works with this kind of packed input. For example, I might want to apply max pooling or average pooling over a packed input along the variable-length (time) dimension.

import torch
from torch.nn.utils.rnn import pack_padded_sequence

def pad_tensor(tensor, length):
    # Append zero rows so the sequence reaches `length` timesteps.
    return torch.cat([tensor, tensor.new_zeros(length - tensor.size(0), *tensor.size()[1:])])

def pad_tensor_list(tensor_list):
    # Pad every sequence to the longest one and stack into a single batch tensor.
    tensor_length = [x.size(0) for x in tensor_list]
    return torch.cat([torch.unsqueeze(pad_tensor(tensor, max(tensor_length)), 0)
                      for tensor in tensor_list], 0), tensor_length

# Dummy sequences of lengths 7, 5 and 3 with 16 features each,
# already sorted by decreasing length as pack_padded_sequence expects.
tensor_list = [torch.randn(7, 16), torch.randn(5, 16), torch.randn(3, 16)]

# Pad the sequences into a (batch, max_len, features) tensor.
padded_tensor, tensor_length = pad_tensor_list(tensor_list)

# Pack the padded tensor.
packed_input = pack_padded_sequence(padded_tensor, tensor_length, batch_first=True)

# The customized layer I would like to write:
maxpooling_for_packed(packed_input)

I understand that we can basically skip packing altogether: iterate over the batch (first) dimension of the padded tensor, select only the valid timesteps of each sequence according to its original length, and then apply max or mean pooling to the selected tensors.
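
A minimal sketch of that loop-based fallback, reusing padded_tensor and tensor_length from the snippet above (max pooling shown; swap in .mean(dim=0) for mean pooling):

# Slice each padded sequence back to its true length, then pool
# over the valid timesteps only.
pooled = torch.stack([padded_tensor[i, :length].max(dim=0)[0]
                      for i, length in enumerate(tensor_length)])  # -> (batch, 16)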

I am wondering whether there is a way to design max pooling, mean pooling, or other layers that work natively and efficiently on packed input. If so, how would you suggest doing it?
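
For concreteness, here is a rough sketch of what I imagine such a layer could look like for mean pooling, working directly on the packed data through batch_sizes. The name mean_pool_packed is my own, the sketch assumes the batch was sorted (it does not handle any unsorted-input bookkeeping), and I am not sure this is the efficient or idiomatic way to do it:

import torch
from torch.nn.utils.rnn import PackedSequence

def mean_pool_packed(packed):
    # packed.data is timestep-major: all step-0 rows first, then the step-1
    # rows of the sequences still active at step 1, and so on.
    data, batch_sizes = packed.data, packed.batch_sizes
    # Recover which sequence each row of packed.data belongs to.
    batch_idx = torch.cat([torch.arange(int(n)) for n in batch_sizes]).to(data.device)
    # Sum each sequence's features over its valid timesteps only.
    summed = data.new_zeros(int(batch_sizes[0]), data.size(1))
    summed.index_add_(0, batch_idx, data)
    # A sequence's length is the number of times its index appears.
    lengths = torch.bincount(batch_idx).unsqueeze(1).to(data.dtype)
    return summed / lengths

With the snippet above, mean_pool_packed(packed_input) would return a (batch, 16) tensor, one pooled vector per sequence.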
