Expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]

Not sure what I'm doing wrong. I can run a whole DataLoader batch through the model, but if I try to pass one image in eval mode I get the error:

expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]

x,y = train_dataset[7000] # Image 7,000 of 50,000

# CNN Model (2 conv layer)
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2))
        self.fc = nn.Linear(7*7*32, 10)
        
    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out
        
cnn = CNN()

cnn.eval()  # Change model to 'eval' mode
cnn(x)

I was seeing this because I was missing a dimension in my tensor, and I suspect that's what you're seeing too. When you pass your full dataset, the shape of x is torch.Size([50000, 1, 28, 28]). When you index it down to one sample, its shape is torch.Size([1, 28, 28]), but you want it to be torch.Size([1, 1, 28, 28]).

Two things you can try:
x, y = train_dataset[7000:7001]        # slice instead of index, if your dataset supports slicing
x, y = x.unsqueeze(0), y.unsqueeze(0)  # or add the batch dimension yourself (if y is a tensor; MNIST-style int labels don't need it)
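
To make this concrete, here is a minimal sketch of the unsqueeze fix, assuming an MNIST-style 1x28x28 sample and the same first conv layer as the model above (note that older PyTorch versions required a 4-D [batch, channels, height, width] input to Conv2d, which is why a bare 3-D sample raised the stride error):

```python
import torch
import torch.nn as nn

# same first conv layer as in the model above
conv = nn.Conv2d(1, 16, kernel_size=5, padding=2)

x = torch.randn(1, 28, 28)  # one sample: [channels, height, width]

batched = x.unsqueeze(0)    # add a batch dimension of size 1
print(batched.shape)        # torch.Size([1, 1, 28, 28])

out = conv(batched)
print(out.shape)            # torch.Size([1, 16, 28, 28])
```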


Thanks! In the end I was able to use .view(1, 1, 28, 28) to get the shape you suggested.
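
For reference, both approaches produce the same tensor; unsqueeze(0) just has the advantage of not hard-coding the image dimensions. A quick sketch, assuming a 1x28x28 sample:

```python
import torch

x = torch.randn(1, 28, 28)

a = x.view(1, 1, 28, 28)  # hard-codes channels, height, width
b = x.unsqueeze(0)        # adds a leading dim of size 1, whatever the rest is

print(a.shape == b.shape)  # True
print(torch.equal(a, b))   # True: same values, just two ways to add the dim
```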
