# What is the meaning of trailing dimensions?

I was reading the documentation and I came across the following sentence:

`*` is any number of trailing dimensions, including none.

What is the meaning of “trailing dimensions”? May I also get an example to clarify the point?

For context:

Packs a Tensor containing padded sequences of variable length.

`input` can be of size `T x B x *` where `T` is the length of the longest sequence (equal to `lengths[0]`), `B` is the batch size, and `*` is any number of dimensions (including 0). If `batch_first` is `True`, a `B x T x *` `input` is expected.

For unsorted sequences, use `enforce_sorted = False`. If `enforce_sorted` is `True`, the sequences should be sorted by length in decreasing order, i.e. `input[:,0]` should be the longest sequence and `input[:,B-1]` the shortest one. `enforce_sorted = True` is only necessary for ONNX export.
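A runnable sketch of the `enforce_sorted` behavior described above (all sizes here are made up for illustration):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Made-up sizes: T=4 (longest sequence), B=2 (batch), one trailing dim of size 3.
x = torch.zeros(4, 2, 3)
lengths = [2, 4]  # NOT in decreasing order

# With enforce_sorted=True (the default) this would raise an error;
# enforce_sorted=False lets pack_padded_sequence sort the batch internally.
packed = pack_padded_sequence(x, lengths, enforce_sorted=False)
print(packed.data.shape)  # torch.Size([6, 3]) -- sum of lengths, then trailing dims
```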

I actually meant to ask about `pad_sequence` https://pytorch.org/docs/stable/nn.html#torch.nn.utils.rnn.pad_sequence

If you have an input of shape `(T, B)`, you will get a packed sequence with shape `(sum of lengths,)` as data.
If you have an input of shape `(T, B, F_1)`, you will get a packed sequence with shape `(sum of lengths, F_1)` as data.
If you have an input of shape `(T, B, F_1, F_2)`, you will get a packed sequence with shape `(sum of lengths, F_1, F_2)` as data.

So `*` is a wildcard (similar to its use in file glob masks) representing any possible value of `input.shape[2:]`.
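The three cases above can be checked directly. A minimal sketch (the sizes `T=4`, `B=2`, `F_1=3` are made up for illustration):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

T, B, F1 = 4, 2, 3
lengths = [4, 2]           # sorted in decreasing order, longest first
x = torch.zeros(T, B, F1)  # input of shape (T, B, F_1)

packed = pack_padded_sequence(x, lengths)
# data keeps only the non-padded entries: (sum of lengths, F_1) = (6, 3)
print(packed.data.shape)  # torch.Size([6, 3])
```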

Best regards

Thomas

What is “sum of lengths”? Is it `T*B`? Whose lengths?

Also, my question is in the context of padding, not packing… so I don’t know why packing is relevant…

pad_sequence stacks a list of Tensors along a new dimension, and pads them to equal length. For example, if the input is a list of sequences with size `L x *`, the output is of size `T x B x *` if `batch_first` is `False`, and `B x T x *` otherwise.

`B` is the batch size. It is equal to the number of elements in `sequences`. `T` is the length of the longest sequence. `L` is the length of a sequence. `*` is any number of trailing dimensions, including none.
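As a runnable illustration of those symbols (the concrete lengths and feature size are made up):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Three sequences of lengths L = 3, 2, 1 with no trailing dimensions (*).
seqs = [torch.ones(3), torch.ones(2), torch.ones(1)]
print(pad_sequence(seqs).shape)                    # T x B: torch.Size([3, 3])
print(pad_sequence(seqs, batch_first=True).shape)  # B x T: torch.Size([3, 3])

# The same lengths with one trailing dimension (features of size 5).
feat_seqs = [torch.ones(3, 5), torch.ones(2, 5), torch.ones(1, 5)]
padded = pad_sequence(feat_seqs, batch_first=True)
print(padded.shape)                                # B x T x *: torch.Size([3, 3, 5])
```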

Yeah, well, you linked `pack_padded_sequence` before.
There it is the same, except that the lengths (an input to `pack_padded_sequence`) are not given as a parameter but are implicit: `[len(s) for s in sequences]`. The sum of lengths is then the sum of the `lengths` parameter (`sum(lengths)`) or, equivalently, `sum(len(s) for s in sequences)`.
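A small sketch tying the two together (the sequence lengths 5, 3, 2 and feature size 2 are made up for illustration):

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

sequences = [torch.ones(5, 2), torch.ones(3, 2), torch.ones(2, 2)]
lengths = [len(s) for s in sequences]  # [5, 3, 2]

padded = pad_sequence(sequences)               # shape (T, B, *) = (5, 3, 2)
packed = pack_padded_sequence(padded, lengths)

# "sum of lengths" is 5 + 3 + 2 = 10, not T*B = 15:
print(sum(lengths))       # 10
print(packed.data.shape)  # torch.Size([10, 2])
```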

Best regards

Thomas

AFAIK this means that the resulting tensor can have an arbitrary number of dimensions, of any size, after the `T` and `B` dimensions.

For example, it could be a tensor of sizes:

1. `torch.Size([T, B])`
2. `torch.Size([T, B, 10])`
3. `torch.Size([T, B, 25, 500])`
4. and so on…

A real runnable example would be useful…

From the documentation:

```
>>> import torch
>>> from torch.nn.utils.rnn import pad_sequence
>>> a = torch.ones(25, 300)
>>> b = torch.ones(22, 300)
>>> c = torch.ones(15, 300)
>>> pad_sequence([a, b, c]).size()
torch.Size([25, 3, 300])
```

Can be mutated to

```
>>> import torch
>>> from torch.nn.utils.rnn import pad_sequence
>>> a = torch.ones(25, 300, 10)
>>> b = torch.ones(22, 300, 10)
>>> c = torch.ones(15, 300, 10)
>>> pad_sequence([a, b, c]).size()
torch.Size([25, 3, 300, 10])
```

to

```
>>> import torch
>>> from torch.nn.utils.rnn import pad_sequence
>>> a = torch.ones(25, 300, 10, 5)
>>> b = torch.ones(22, 300, 10, 5)
>>> c = torch.ones(15, 300, 10, 5)
>>> pad_sequence([a, b, c]).size()
torch.Size([25, 3, 300, 10, 5])
```

to

```
>>> import torch
>>> from torch.nn.utils.rnn import pad_sequence
>>> a = torch.ones(25, 300, 10, 5, 7)
>>> b = torch.ones(22, 300, 10, 5, 7)
>>> c = torch.ones(15, 300, 10, 5, 7)
>>> pad_sequence([a, b, c]).size()
torch.Size([25, 3, 300, 10, 5, 7])
```

I think you get the rough idea? `*` replaces `(300,)` or `(300, 10)` or `(300, 10, 5)` or `(300, 10, 5, 7)`.