Can a packed sequence be put into a linear layer?

I am currently using a PackedSequence as the input to an LSTM network. However, the hidden states and the desired output have different feature sizes, so I need to transform the LSTM output with a linear layer followed by a softmax activation. My question is: can I do this directly on the PackedSequence? My concern is that if I simply pad the LSTM output (a PackedSequence) first, the padded positions will be filled with zeros, which the linear transformation and softmax will then turn into non-zero values. In that case the output is incorrect, because the padded parts of the sequence should always stay zero.
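
For reference, here is a minimal sketch of the setup I mean (all dimensions, lengths, and variable names are made up for illustration), showing how the padded rows end up non-zero after the linear layer and softmax:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Hypothetical dimensions for illustration
input_size, hidden_size, num_classes = 10, 20, 5

lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
fc = nn.Linear(hidden_size, num_classes)

# Batch of 2 sequences with lengths 3 and 2, padded to length 3
x = torch.randn(2, 3, input_size)
lengths = torch.tensor([3, 2])

packed = pack_padded_sequence(x, lengths, batch_first=True, enforce_sorted=False)
packed_out, _ = lstm(packed)

# Unpacking the LSTM output re-introduces zero rows at the padded steps...
padded_out, _ = pad_packed_sequence(packed_out, batch_first=True)

# ...but after the linear layer + softmax those rows are no longer zero
scores = torch.softmax(fc(padded_out), dim=-1)
print(scores[1, 2])  # padded position of the shorter sequence: non-zero values
```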