This might be an easy question, but I am not familiar with the MaxPool layer.
When I use an Embedding layer, it increases the dimension of the tensor.
import torch
import torch.nn as nn

embedding = nn.Embedding(10, 5)
input = torch.LongTensor([[[1,2,4,5],[4,3,2,9]],[[1,2,4,5],[4,3,2,9]]])
output = embedding(input)
input.size()
torch.Size([2, 2, 4])
output.size()
torch.Size([2, 2, 4, 5])
I want to add a MaxPool2d (or any other) layer to convert my output to
torch.Size([2, 2, 1, 5])
Let's say my output tensor is:
tensor([[[[7, 0, 0, 3, 6],
          [6, 7, 5, 2, 0],
          [2, 1, 9, 1, 9],
          [1, 5, 8, 6, 1]],

         [[4, 7, 2, 4, 5],
          [4, 4, 2, 6, 2],
          [9, 1, 0, 3, 5],
          [5, 7, 6, 5, 8]]],

        [[[9, 6, 0, 6, 0],
          [8, 9, 7, 0, 2],
          [4, 7, 7, 4, 5],
          [7, 9, 1, 0, 8]],

         [[6, 4, 5, 7, 6],
          [2, 2, 4, 9, 4],
          [7, 7, 9, 0, 0],
          [6, 8, 8, 4, 1]]]])
I want to convert it to:
torch.Size([2, 2, 1, 5])
tensor([[[[7, 7, 9, 6, 9]],

         [[9, 7, 6, 6, 8]]],

        [[[9, 9, 7, 6, 8]],

         [[7, 8, 9, 9, 6]]]])
so that I can then squeeze it down to
torch.Size([2, 2, 5])
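One way to do this (a sketch, assuming the goal is a max over the token dimension, dim=2) is to call `max` with `keepdim=True`, or equivalently apply `nn.MaxPool2d` with a kernel spanning the full height:

```python
import torch
import torch.nn as nn

embedding = nn.Embedding(10, 5)
input = torch.LongTensor([[[1, 2, 4, 5], [4, 3, 2, 9]],
                          [[1, 2, 4, 5], [4, 3, 2, 9]]])
output = embedding(input)                          # shape [2, 2, 4, 5]

# Max over the token dimension, keeping it as size 1
pooled = output.max(dim=2, keepdim=True).values    # shape [2, 2, 1, 5]

# Equivalent with MaxPool2d: treat output as [N, C, H, W] and pool
# over the full height with a (4, 1) window
pool = nn.MaxPool2d(kernel_size=(4, 1))
pooled2 = pool(output)                             # shape [2, 2, 1, 5]

# Drop the singleton dimension for the final shape
squeezed = pooled.squeeze(2)                       # shape [2, 2, 5]
```

Note that the `MaxPool2d` variant hard-codes the sequence length 4 into the kernel size, while `max(dim=2)` adapts to any length, so the latter is usually more convenient here.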