Feeding 1d PackedSequence data to an LSTM

I am trying to train an LSTM on audio signal data. I have used pad_sequence() and pack_padded_sequence() to get the resulting PackedSequence data.

However, this data is one-dimensional. I checked it using x.data.shape in the forward() function, and it is a 1-D tensor (x here is a PackedSequence).

I've also passed batch_first=True when defining the LSTM.
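To make the situation concrete, here is a minimal sketch (with made-up toy signals standing in for the real audio data) of how packing raw 1-D sequences produces a 1-D `packed.data` tensor that the LSTM rejects:

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

# Toy stand-ins for variable-length 1-D audio signals
seqs = [torch.randn(5), torch.randn(3), torch.randn(4)]
lengths = [len(s) for s in seqs]

padded = pad_sequence(seqs, batch_first=True)  # shape (3, 5) -- no feature dim
packed = pack_padded_sequence(padded, lengths,
                              batch_first=True, enforce_sorted=False)
print(packed.data.shape)  # 1-D tensor of all timesteps concatenated

lstm = torch.nn.LSTM(input_size=1, hidden_size=8, batch_first=True)
try:
    lstm(packed)
except RuntimeError as e:
    print(e)  # complains that the input is 1-D, not 2-D
```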

I’m getting this error:

~/.conda/envs/ml/lib/python3.8/site-packages/torch/nn/modules/rnn.py in check_input(self, input, batch_sizes)
    172         expected_input_dim = 2 if batch_sizes is not None else 3
    173         if input.dim() != expected_input_dim:
--> 174             raise RuntimeError(
    175                 'input must have {} dimensions, got {}'.format(
    176                     expected_input_dim, input.dim()))

RuntimeError: input must have 2 dimensions, got 1

Would adding a dummy dimension to this data work?

How can I do this to PackedSequence data?
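One option (a sketch, not necessarily the cleanest fix): since PackedSequence is a NamedTuple, you can rebuild it with `_replace`, unsqueezing a feature dimension onto its `data` tensor while keeping `batch_sizes` and the index tensors intact:

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

# Toy 1-D sequences, packed without a feature dimension
seqs = [torch.randn(5), torch.randn(3)]
packed = pack_padded_sequence(pad_sequence(seqs, batch_first=True),
                              [5, 3], batch_first=True)

# Swap in a (T_total, 1) data tensor; other fields are unchanged
fixed = packed._replace(data=packed.data.unsqueeze(-1))
print(fixed.data.shape)  # (8, 1)

lstm = torch.nn.LSTM(input_size=1, hidden_size=4, batch_first=True)
out, (h, c) = lstm(fixed)  # runs without the dimension error
```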

I solved this by adding a new axis to each sequence before padding and packing, in the collate_fn() of the DataLoader.
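A sketch of that fix (the dataset here is a dummy list of variable-length tensors; the key line is the `(T,) -> (T, 1)` reshape before padding):

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence
from torch.utils.data import DataLoader

def collate_fn(batch):
    # Add a feature dimension to each 1-D signal before padding/packing,
    # so every timestep becomes a length-1 feature vector
    batch = [s[:, None] for s in batch]             # (T,) -> (T, 1)
    lengths = [len(s) for s in batch]
    padded = pad_sequence(batch, batch_first=True)  # (B, T_max, 1)
    return pack_padded_sequence(padded, lengths,
                                batch_first=True, enforce_sorted=False)

dataset = [torch.randn(n) for n in (7, 4, 6)]  # dummy variable-length signals
loader = DataLoader(dataset, batch_size=3, collate_fn=collate_fn)
packed = next(iter(loader))
print(packed.data.shape)  # now 2-D: (sum of lengths, 1)

lstm = torch.nn.LSTM(input_size=1, hidden_size=8, batch_first=True)
out, _ = lstm(packed)  # no dimension error
```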