```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# input = torch.LongTensor(4, 4).random_(0, 50)
input = torch.randn(4, 4)
print(input)

m = nn.MaxPool2d(kernel_size=2, stride=2)
output = m(input)  # fails here: MaxPool2d expects a rank-3 or rank-4 input
print(output)
```
I created this example, which does not work, but when I set
`input = torch.randn(1, 4, 4)` it works. Can someone explain the logic behind this design decision?
I know that a few days ago, when I loaded an image, it typically had shape (1, 128, 128) if it was single channel, or (3, 128, 128) if it had 3 channels (RGB).
Is this channel-dimension convention perhaps the reason why MaxPool2d requires rank-3 or rank-4 tensors?
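For what it's worth, a minimal sketch of the workaround I found (assuming the missing dimension is the channel one): `unsqueeze(0)` turns the rank-2 `(H, W)` tensor into a rank-3 `(C, H, W)` tensor, which MaxPool2d accepts.

```python
import torch
import torch.nn as nn

m = nn.MaxPool2d(kernel_size=2, stride=2)

x = torch.randn(4, 4)          # rank-2 tensor (H, W): rejected by MaxPool2d
x3 = x.unsqueeze(0)            # rank-3 tensor (1, 4, 4): a single-channel "image"
out = m(x3)                    # pools each 2x2 window of the 4x4 plane
print(out.shape)               # torch.Size([1, 2, 2])
```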