What's the best way to resize tensors for alignment purposes?

In my project, I need to feed a tensor of shape [10, 17] to a transformer layer, i.e. 10 time steps, each a 17-dimensional vector.

Now, I have two tensors X1=[5, 64] and X2=[5, 17], and I would like to concatenate them into a single [10, 17] tensor that the transformer layer can consume.

Currently, I am wondering whether I can convert X1 to [5, 17]. Maybe this can be achieved by applying a torch.nn.Linear(64, 17) to each row of X1, but I am not sure how to implement it cleanly in PyTorch. Could someone give me some help?

Using an nn.Linear layer could be a valid approach. You could add it as a trainable layer to your model and apply it in the forward function via:

def forward(self, X1, X2):
    X1 = self.linear(X1)
    # X1.shape = [5, 17]
    x = torch.cat((X1, X2), dim=0)
    # x.shape = [10, 17]
    x = self.layer(x)
    ...
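Putting the pieces together, a minimal self-contained sketch could look like the following (the class name FusionModel and the use of nn.TransformerEncoderLayer as the downstream layer are illustrative assumptions, not from your code; unbatched 2D input to the transformer layer requires a recent PyTorch version):

```python
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    """Projects X1 from 64 to 17 features, then stacks X1 and X2 along time."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(64, 17)  # trainable projection for X1
        # Placeholder transformer layer; d_model must match the feature size.
        self.layer = nn.TransformerEncoderLayer(d_model=17, nhead=1)

    def forward(self, X1, X2):
        X1 = self.linear(X1)            # [5, 64] -> [5, 17]
        x = torch.cat((X1, X2), dim=0)  # [5, 17] + [5, 17] -> [10, 17]
        return self.layer(x)

model = FusionModel()
out = model(torch.rand(5, 64), torch.rand(5, 17))
print(out.shape)  # torch.Size([10, 17])
```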

Thanks a lot. But I was actually struggling with the self.linear() layer here.
I know if X1 is a 1D vector I can do this:

X1 = torch.rand(1, 64)
self.linear = torch.nn.Linear(64,17)
X1 = self.linear(X1)

But I am not sure how to implement it when X1 = torch.rand(5, 64).

In exactly the same way: dim0 of the input tensor is treated as the batch dimension, so nn.Linear transforms each row independently:

X1 = torch.rand(5, 64)
linear = torch.nn.Linear(64,17)
X1 = linear(X1)
print(X1.shape)
# torch.Size([5, 17])

I'm not sure you really want to concatenate along the batch dimension afterwards and thereby double the batch size, but this part should work.
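As a side note (a small sketch, not from the thread): nn.Linear always applies the same affine map to the last dimension, regardless of how many leading dimensions the input has, so the [5, 64] case above is just the unbatched instance of a more general rule:

```python
import torch

linear = torch.nn.Linear(64, 17)

# nn.Linear acts on the last dimension; any leading
# dimensions (batch, time, ...) are broadcast over.
x2d = torch.rand(5, 64)
x3d = torch.rand(8, 5, 64)  # e.g. a batch of 8 sequences

print(linear(x2d).shape)  # torch.Size([5, 17])
print(linear(x3d).shape)  # torch.Size([8, 5, 17])
```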
