Hello everyone,
I am trying to concatenate embeddings multiple times in the forward pass of my encoder.
My embeddings are [[114, 300], [6, 50]], where 114 and 6 are the num_embeddings and 300 and 50 are the embedding_dim values.
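For reference, self.g_embedding is built roughly like this (a sketch of my setup, not the exact code):

import torch.nn as nn

# in __init__ (sketch; actual code may differ slightly): one embedding per feature column
self.g_embedding = nn.ModuleList([
    nn.Embedding(114, 300),  # feature 0: 114 indices, 300-dim vectors
    nn.Embedding(6, 50),     # feature 1: 6 indices, 50-dim vectors
])

Below is my code for the forward pass: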
outer_tensor = []
inner_tensor = []
for train_pair in g_seqs:                    # one sequence of timesteps
    input_tensor = train_pair
    input_length = input_tensor.size(0)      # number of timesteps (5 here)
    for ei in range(input_length):
        encoder_output = input_tensor[ei]    # the two feature indices of this timestep
        # look up each feature in its own embedding: shapes [1, 300] and [1, 50]
        embedded_features = [embedding(f.view(-1)) for embedding, f in zip(self.g_embedding, encoder_output)]
        embedded = torch.cat(embedded_features, dim=1).view(1, 1, -1)  # [1, 1, 350]
        inner_tensor.append(embedded)
    # flatten all timesteps collected so far into one tensor for this sequence
    embed_inner_tensor = torch.cat(inner_tensor, dim=0).view(1, 1, -1)
    outer_tensor.append(embed_inner_tensor)
# concatenate the per-sequence tensors -- this is where the error occurs
graph_embed_tensor = torch.cat(outer_tensor, dim=1).view(1, 1, -1)
return graph_embed_tensor
where g_seqs is the tensor below (shape [4, 5, 2]: 4 sequences, 5 timesteps each, 2 feature indices per timestep, zero-padded to equal length):
tensor([[[12,  3], [30,  3], [31,  3], [ 0,  0], [ 0,  0]],
        [[53,  4], [14,  4], [35,  4], [ 0,  0], [ 0,  0]],
        [[33,  2], [24,  2], [31,  2], [10,  2], [ 0,  0]],
        [[23,  3], [ 4,  3], [28,  3], [29,  3], [ 2,  3]]])
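Each timestep holds two feature indices, so after the two lookups every embedded timestep should be 300 + 50 = 350 values, and every 5-step sequence should flatten to 5 * 350 = 1750. A quick standalone check of the lookup logic (a sketch with the embedding sizes above and random weights, not my model code):

import torch
import torch.nn as nn

emb = nn.ModuleList([nn.Embedding(114, 300), nn.Embedding(6, 50)])  # same sizes as my layers
step = torch.tensor([12, 3])  # first timestep of the first sequence
parts = [e(f.view(-1)) for e, f in zip(emb, step)]  # shapes [1, 300] and [1, 50]
print(torch.cat(parts, dim=1).shape)  # torch.Size([1, 350])

Since all sequences are padded to the same length, I would expect every entry of outer_tensor to come out the same size, but they do not.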
At the final concatenation (graph_embed_tensor = torch.cat(outer_tensor, dim=1).view(1, 1, -1)), I get a runtime error:
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 10500 and 21000 in dimension 3
The entries of outer_tensor turn out to have different sizes: [1, 1, 10500], [1, 1, 21000], [1, 1, 31500]. They grow by a fixed amount per sequence, which makes me suspect inner_tensor keeps accumulating because I never clear it between sequences.
How can I concatenate these tensors of different sizes? What would be the correct way to concatenate in this scenario?
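For what it's worth, one workaround I considered is zero-padding every entry of outer_tensor to the longest size before concatenating (a minimal standalone sketch with dummy tensors of the sizes from the error above, not my actual model code):

import torch
import torch.nn.functional as F

# dummy stand-ins for the mismatched entries of outer_tensor
outer_tensor = [torch.randn(1, 1, 10500), torch.randn(1, 1, 21000), torch.randn(1, 1, 31500)]
max_len = max(t.size(-1) for t in outer_tensor)
# right-pad the last dimension of each tensor with zeros up to max_len
padded = [F.pad(t, (0, max_len - t.size(-1))) for t in outer_tensor]
graph_embed_tensor = torch.cat(padded, dim=1)  # -> torch.Size([1, 3, 31500])

But I am not sure whether zero-padding is appropriate here, or whether the real fix is to reset inner_tensor at the start of each outer-loop iteration so all entries come out the same size. Any suggestions would be much appreciated.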