Embedding layer with 3D input and 4D output

I have an input of shape (batch_size, MAX_NUMBER_SENTENCES, MAX_SENT_LENGTH), i.e. each batch element is a list of lists of token indices.

I have initialized an embedding layer as

```python
self.query_embedding = nn.Embedding(len(word_lookup_table), inp_dim)
```
and loaded pretrained word embeddings into its weights. When I pass an input of the shape above, the output has shape:

(batch_size, MAX_NUMBER_SENTENCES, MAX_SENT_LENGTH, embed_dim).

Is this the correct output?
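For reference, here is a minimal, self-contained sketch of my setup. The sizes and the random pretrained matrix below are placeholders; my real values come from `word_lookup_table` and a pretrained embedding file:

```python
import torch
import torch.nn as nn

# Placeholder sizes (stand-ins for my real dataset values)
vocab_size = 1000        # stands in for len(word_lookup_table)
inp_dim = 300            # embedding dimension
batch_size, max_num_sents, max_sent_len = 4, 6, 20

# Pretrained weights would normally be loaded from disk; random here
pretrained = torch.randn(vocab_size, inp_dim)
query_embedding = nn.Embedding.from_pretrained(pretrained, freeze=False)

# A batch of token indices: (batch, sentences, tokens)
tokens = torch.randint(0, vocab_size, (batch_size, max_num_sents, max_sent_len))

out = query_embedding(tokens)
print(out.shape)  # torch.Size([4, 6, 20, 300])
```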

Yes, that's the expected output, as shown in the docs: nn.Embedding accepts an index tensor of arbitrary shape (*) and returns a tensor of shape (*, embedding_dim), i.e. it appends embedding_dim as a new last dimension.
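You can verify this quickly for any input rank; the sizes below are arbitrary:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=50, embedding_dim=8)

# Works the same for 1D, 2D, 3D, ... index tensors
for shape in [(7,), (2, 5), (2, 3, 5)]:
    idx = torch.randint(0, 50, shape)
    assert emb(idx).shape == (*shape, 8)
```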

Thank you for your response.