How can I reduce the dimension of a 3D tensor to 2D using convolutional filters?

I have a question about the dimensions of a tensor:

  1. How do I convert my model output torch.Size([2560, 128, 128]) to torch.Size([2560,128]) using convolutions?

You could use an nn.Conv1d layer with a kernel size of 128 and squeeze the output tensor in dim 2 afterwards:

import torch
import torch.nn as nn

# Conv1d expects an input of shape (batch, channels, length).
# A kernel spanning the full length (128) collapses that dimension to 1.
conv = nn.Conv1d(in_channels=128, out_channels=128, kernel_size=128)
x = torch.randn(2560, 128, 128)

out = conv(x)         # torch.Size([2560, 128, 1])
out = out.squeeze(2)  # remove the singleton length dimension
print(out.shape)
# torch.Size([2560, 128])
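
As a side note: if you don't actually need learnable weights for this reduction, a simpler alternative (assuming a plain average over the last dimension is acceptable for your use case) would be mean pooling:

```python
import torch

x = torch.randn(2560, 128, 128)

# Average over the last dimension instead of learning a convolution.
out = x.mean(dim=2)
print(out.shape)
# torch.Size([2560, 128])
```

The Conv1d approach lets the model learn the weighting of the 128 positions, while the mean treats them all equally.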