Any idea on how to slice an (N,D) tensor with minibatches?

I was asked to add a GNN module to an existing intent/slot-tagger co-training network. I was given a bipartite graph with connections between intents and slot tags, and I am supposed to update the embeddings for the intents and slot tags several times per minibatch. I am not very familiar with PyTorch Geometric. What I am trying to do is generate new embeddings for all nodes by passing the same graph through the GNN every time and then slice from the result. However, the new embeddings are of size (N, D) and the indices have size (batch_size, L). How can I generate a (batch_size, L, D) tensor accordingly? May I use the from_pretrained method with freeze=False to manually update the embeddings each time?
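
Roughly what I have in mind, as a sketch (the GNN output is faked with a random tensor here; in my code it would come from something like gnn(data.x, data.edge_index), and batch_idx is the index tensor produced by the tagger):

import torch

N, D = 50, 64            # number of graph nodes, embedding dim
batch_size, L = 8, 20    # minibatch size, sequence length

# stand-in for the GNN output, recomputed every training step
node_emb = torch.randn(N, D)

# stand-in for the per-token node indices coming from the tagger
batch_idx = torch.randint(0, N, (batch_size, L))

# what I want: a (batch_size, L, D) tensor sliced from node_emb
# sliced = ???  # should have shape (batch_size, L, D)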

I am sorry for the unclear description. The problem is about slicing, but I would also like to hear suggestions on my overall approach, which I am not very confident about.

I’m not sure I understand the question completely, so please correct me if I’m misunderstanding something.

If you would like to finetune an embedding layer, you can use the from_pretrained method and set freeze=False, as you suggested.
The shapes should work out of the box:

import torch
import torch.nn as nn

# Create embedding
N, D = 10, 100
emb = nn.Embedding(num_embeddings=N, embedding_dim=D)

# Create input
batch_size = 5
L = 7
x = torch.randint(0, N, (batch_size, L))

# Forward pass
output = emb(x)
print(output.shape)  # batch_size, L, D
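
If the (N, D) tensor is the output of your GNN rather than the weight of an nn.Embedding layer, you don't even need an embedding layer for the lookup: plain advanced indexing (or the functional F.embedding) already gives you the (batch_size, L, D) tensor and keeps gradients flowing back into the GNN. A minimal sketch, with node_emb standing in for the GNN output:

import torch
import torch.nn.functional as F

N, D = 10, 100
batch_size, L = 5, 7

# Stand-in for the GNN output, e.g. node_emb = gnn(data.x, data.edge_index)
node_emb = torch.randn(N, D, requires_grad=True)

# (batch_size, L) node indices
idx = torch.randint(0, N, (batch_size, L))

# Advanced indexing: each index picks the corresponding row of node_emb
sliced = node_emb[idx]
print(sliced.shape)  # torch.Size([5, 7, 100])

# Equivalent functional lookup, also differentiable w.r.t. node_emb
sliced_f = F.embedding(idx, node_emb)
print(torch.allclose(sliced, sliced_f))  # True

As far as I know, nn.Embedding.from_pretrained(node_emb, freeze=False) would wrap the tensor in a new leaf parameter, so gradients would update that copy rather than flow back through the GNN. If you recompute the node embeddings every minibatch, indexing the GNN output directly is probably what you want.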