I have one question about the dimension of a tensor
- How do I convert my model output torch.Size([2560, 128, 128]) to torch.Size([2560,128]) using convolutions?
You could use an nn.Conv1d layer with a kernel size of 128. Since the kernel covers the entire last dimension, the output length in that dimension becomes 1, and you can squeeze the output tensor in dim2 afterwards:
import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=128, out_channels=128, kernel_size=128)
x = torch.randn(2560, 128, 128)
out = conv(x)         # [2560, 128, 1]: kernel spans the whole last dim
out = out.squeeze(2)  # drop the trailing dimension of size 1
print(out.shape)
# torch.Size([2560, 128])
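As a side note, if you don't need a learned reduction, a parameter-free alternative is to average (or max) over the last dimension; this is a sketch, not a drop-in replacement for the conv, since it has no trainable weights:

```python
import torch

x = torch.randn(2560, 128, 128)
out = x.mean(dim=2)   # reduce the last dimension by averaging
print(out.shape)
# torch.Size([2560, 128])
```

Which reduction is appropriate depends on whether the 128 positions in the last dimension should be weighted by learned parameters or treated uniformly.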