How to declare a torch tensor with an unknown size in one dimension

I have a list of sentences and I am converting the list to a 3D tensor. The first dimension represents the number of sentences, the second the number of words, and the third the word embedding size.

The problem is that the number of words varies between sentences. I tried to create a 3D tensor as follows:

all_sentences1 = torch.FloatTensor(len(instances), None, args.emsize)

But this gives me the following error:

TypeError: torch.FloatTensor constructor received an invalid combination of arguments - got (int, NoneType, int), but expected one of:
 * no arguments
 * (int ...)
      didn't match because some of the arguments have invalid types: (int, NoneType, int)
 * (torch.FloatTensor viewed_tensor)
 * (torch.Size size)
 * (torch.FloatStorage data)
 * (Sequence data)

How can I declare a 3d tensor?

Tensors must be declared with a specific size in each dimension, so you need to pad the sentences to a common length, then use pack_padded_sequence if you’re running them through an RNN.
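As a minimal sketch of the padding step (the sentence lengths and embedding size below are made up for illustration), `torch.nn.utils.rnn.pad_sequence` can stack variable-length tensors into one zero-padded 3D tensor:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

emsize = 5  # illustrative embedding size
# Three "sentences" of 7, 3 and 4 words, each a (num_words, emsize) tensor.
sentences = [torch.randn(n, emsize) for n in (7, 3, 4)]

# pad_sequence zero-pads every sentence to the length of the longest one.
# batch_first=True gives shape (num_sentences, max_words, emsize).
padded = pad_sequence(sentences, batch_first=True)
print(padded.shape)  # torch.Size([3, 7, 5])
```

The padding rows are filled with `padding_value` (0.0 by default), which is what `pack_padded_sequence` later tells the RNN to skip.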

I can pad the sequences manually, but is there a torch function that does the padding? Also, can you point me to a tutorial that explains pack_padded_sequence? I have no idea what it does.


I have just looked at the documentation for pack_padded_sequence and pad_packed_sequence, but it’s not clear to me. Can anyone explain how to use these two functions?

The notes here may be helpful for using the packing functions.
Depending on your use case, the torchtext library may help with creating an iterable dataset object containing your sentences; let me know if you run into issues with it (the documentation is in the code’s docstrings).
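To sketch how the two functions fit together (the sizes and the GRU below are purely illustrative, and the sentences are pre-sorted longest-first, which older PyTorch versions require):

```python
import torch
from torch.nn.utils.rnn import (pad_sequence, pack_padded_sequence,
                                pad_packed_sequence)

emsize, hidden = 5, 8  # illustrative embedding and hidden sizes
# Three sentences, pre-sorted by length (longest first).
sentences = [torch.randn(n, emsize) for n in (7, 4, 3)]
lengths = [7, 4, 3]

# 1. Pad to a common length: shape (3, 7, 5).
padded = pad_sequence(sentences, batch_first=True)

# 2. Pack, recording the true lengths so the RNN skips the padding rows.
packed = pack_padded_sequence(padded, lengths, batch_first=True)

rnn = torch.nn.GRU(emsize, hidden, batch_first=True)
packed_out, h = rnn(packed)

# 3. Unpack the output back into a padded tensor: shape (3, 7, 8).
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
```

The key idea: the RNN itself only ever sees the packed form, so its hidden states are never updated on padding positions; packing and unpacking are just the conversions on either side.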

You need to understand that PyTorch works differently from static-graph frameworks. You can’t define “placeholders” with unspecified sizes, but you can pass tensors of different sizes to your model without any modifications. Instead of reusing a single placeholder, you always operate on real data.
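A minimal sketch of this define-by-run behaviour (the module and sizes are arbitrary): the same model processes batches with different numbers of words, and no placeholder or graph rebuild is needed.

```python
import torch

# One module, defined once; only the feature size (5) is fixed.
rnn = torch.nn.GRU(input_size=5, hidden_size=8, batch_first=True)

shapes = []
for num_words in (3, 10):                  # the second dimension varies per batch
    batch = torch.randn(2, num_words, 5)   # (sentences, words, emsize)
    out, h = rnn(batch)                    # runs without redefining anything
    shapes.append(tuple(out.shape))
print(shapes)  # [(2, 3, 8), (2, 10, 8)]
```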