Pack_padded_sequence with dynamic sequence lengths

PyTorch offers a pack_padded_sequence function (in torch.nn.utils.rnn) for RNNs, which enables efficient batching of variable-length sequences when the lengths are known in advance, saving computation on sequences that end earlier in the batch.
In my case, the length of each sequence is not known in advance; it is decided while the batch is being processed (e.g. based on the intermediate network output).
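
For reference, this is the standard usage I mean, where the lengths are known up front (a minimal sketch with made-up sizes):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Batch of 3 padded sequences, max length 5, feature size 4 (illustrative sizes).
batch = torch.randn(3, 5, 4)
lengths = torch.tensor([5, 3, 2])  # known in advance, sorted descending

rnn = torch.nn.GRU(input_size=4, hidden_size=8, batch_first=True)

packed = pack_padded_sequence(batch, lengths, batch_first=True, enforce_sorted=True)
packed_out, h = rnn(packed)
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
# out has shape (3, 5, 8); positions past each sequence's length are zero-padded,
# and the RNN skips computation for them internally.
```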

(1) Could pack_padded_sequence, or some other existing PyTorch function, be used to efficiently process a batch whose sequence lengths are determined on the fly like this?
(2) If not, would it be possible to implement such a function in PyTorch?
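
To make the setting concrete, here is a sketch of what I currently do: step an RNN cell manually and decide per sequence, from the intermediate output, whether it has finished. The stopping criterion (a linear "stop" head) is hypothetical, just for illustration:

```python
import torch

torch.manual_seed(0)
cell = torch.nn.GRUCell(input_size=4, hidden_size=8)
readout = torch.nn.Linear(8, 1)  # hypothetical per-step "stop" score

x = torch.randn(3, 10, 4)            # batch of 3, at most 10 steps
h = torch.zeros(3, 8)
active = torch.ones(3, dtype=torch.bool)

for t in range(x.size(1)):
    h_new = cell(x[:, t], h)
    # Only still-active sequences update their hidden state.
    h = torch.where(active.unsqueeze(1), h_new, h)
    # Length decided on-line: a sequence stops once its score exceeds 0.
    active = active & (readout(h).squeeze(1) <= 0)
    if not active.any():
        break
```

Note that every iteration still runs the cell on the full batch, including sequences that have already stopped; this is exactly the wasted computation I would like to avoid.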