I don’t think there is a single “right way” to achieve this, as the proper approach depends on your use case.
You could try one of these approaches (and I’m sure I’m missing others):
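For the snippets below to run, you need an input tensor; the printed output shapes suggest x has shape [2, 1, 80, 64] (this shape is an assumption inferred from the expected results, not stated in the original question):

```python
import torch
import torch.nn as nn

# hypothetical input; the printed shapes below imply [2, 1, 80, 64]
x = torch.randn(2, 1, 80, 64)
print(x.shape)
# torch.Size([2, 1, 80, 64])
```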
- slice the tensor
y = x[..., :16]
print(y.shape)
# torch.Size([2, 1, 80, 16])
- index it with a stride of 4
y = x[..., ::4]
print(y.shape)
# torch.Size([2, 1, 80, 16])
- use any pooling layer (max, avg, etc.; the same also works with adaptive pooling layers):
pool = nn.MaxPool2d(kernel_size=(1, 2), stride=(1, 4))
# output width: floor((64 - 2) / 4) + 1 = 16
y = pool(x)
print(y.shape)
# torch.Size([2, 1, 80, 16])
pool = nn.AdaptiveAvgPool2d(output_size=(80, 16))
y = pool(x)
print(y.shape)
# torch.Size([2, 1, 80, 16])
- or manually reduce the last dimension with any reduction op (sum, mean, max, …)
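A minimal sketch of the last bullet, assuming x has shape [2, 1, 80, 64]: group the last dimension into 16 chunks of 4 consecutive values, then apply a reduction over each chunk (mean is used here; sum, max, etc. would work the same way):

```python
import torch

# hypothetical input matching the shapes used above
x = torch.randn(2, 1, 80, 64)

# split the last dim (64) into 16 groups of 4, then reduce each group
y = x.view(2, 1, 80, 16, 4).mean(dim=-1)
print(y.shape)
# torch.Size([2, 1, 80, 16])
```

Note that view groups neighboring values together; if you instead want to reduce every 4th value jointly, reshape to (2, 1, 80, 4, 16) and reduce dim=-2.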