Channel dimension mismatch while implementing SqueezeNet

Error :

RuntimeError: Given groups=1, weight of size [16, 128, 1, 1], expected input[32, 64, 126, 126] to have 128 channels, but got 64 channels instead

The above is the error I got while trying to run a SqueezeNet model. I have tried various ways to fix it but have no idea where it went wrong. I took the implementation from

Github link

Could anyone help me by pointing out where I went wrong?
The images are in batches of 16 → [16, 3, 512, 512]

The error seems to be coming from somewhere around the second hidden layer …

The error is raised by a conv layer that expects 128 input channels and returns 16 output channels, while the incoming activation only has 64 channels.
Did you modify the model in any way? If so, could you post the applied changes?
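
For context, this kind of mismatch typically originates in the Fire modules: each Fire block concatenates its two expand branches, so it outputs expand1x1 + expand3x3 channels, and the next block's squeeze conv must use exactly that sum as its in_channels. Below is a minimal sketch that reproduces your error; the Fire class is a simplified stand-in with the channel numbers taken from your error message, not the exact implementation from the linked repo:

import torch
import torch.nn as nn

class Fire(nn.Module):
    # Simplified Fire module: 1x1 squeeze conv followed by parallel 1x1/3x3 expands
    def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Output channels = expand1x1_ch + expand3x3_ch after the concatenation
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

x = torch.randn(32, 64, 126, 126)   # activation shape from the error message

fire1 = Fire(64, 16, 64, 64)        # outputs 64 + 64 = 128 channels
fire2 = Fire(128, 16, 64, 64)       # its squeeze conv expects 128 input channels
out = fire2(fire1(x))               # works: channel counts chain correctly

# fire2(x)  # fails: fire2's squeeze conv has weight [16, 128, 1, 1] but x has
#           # only 64 channels -> exactly the RuntimeError you posted

So check that the in_channels of each squeeze conv matches the sum of the previous Fire module's expand channels.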

The torchvision implementations work fine for me:

import torch
from torchvision import models

# SqueezeNet 1.0
model = models.squeezenet1_0()
x = torch.randn(1, 3, 224, 224)
out = model(x)

# SqueezeNet 1.1
model = models.squeezenet1_1()
x = torch.randn(1, 3, 224, 224)
out = model(x)
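
Recent torchvision versions also end SqueezeNet with an adaptive average pool, so, assuming you are on such a version, the same sanity check should pass for the 512x512 batches you mentioned:

import torch
from torchvision import models

model = models.squeezenet1_1()
x = torch.randn(16, 3, 512, 512)  # the batch shape from your question
out = model(x)
print(out.shape)                  # torch.Size([16, 1000]) with default num_classes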