How to handle variable input lengths for a transformer encoder?

I have texts of variable length and want to use an encoder like in this example:
https://pytorch.org/tutorials/beginner/transformer_tutorial.html
For RNNs I would use pack_padded_sequence; what should I do for a transformer?
Or is it OK for the transformer to just get zeros (padding) in its inputs?
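Here is a minimal sketch of what I mean (toy data and made-up sizes, just to illustrate the question): I pad with zeros using pad_sequence and feed the batch to nn.TransformerEncoder, and I'm not sure whether I should also build a src_key_padding_mask or whether the zeros are fine as-is.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence

# Toy batch: three token-id sequences of different lengths (made-up data).
seqs = [torch.randint(1, 100, (n,)) for n in (5, 9, 3)]

# Pad to the longest sequence with 0 (my padding index) -> shape (max_len, batch)
padded = pad_sequence(seqs, batch_first=False, padding_value=0)

d_model = 16
embed = nn.Embedding(100, d_model, padding_idx=0)
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

x = embed(padded)            # (max_len, batch, d_model)
out = encoder(x)             # option 1: padded positions go in as zero embeddings

# Option 2: mark the padded positions and pass them as src_key_padding_mask?
pad_mask = (padded == 0).transpose(0, 1)   # (batch, max_len), True where padded
out_masked = encoder(x, src_key_padding_mask=pad_mask)
```

Is option 2 the right equivalent of pack_padded_sequence here, or does option 1 work just as well?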