Output of nn.functional.conv2d

I’m using the same layer dimensions as in AlexNet, and the input images have the shape ([4, 3, 227, 227]) (batch size = 4), but I’m making modifications in the convolutional layers. Specifically, I’m defining my own convolutional layer exactly like an nn.Conv2d, but the output of the first convolutional layer of “my” AlexNet does not have the shape ([96, 3, 11, 11]) as I expected; instead it is ([4, 96, 217, 217]).
In other words,
x = F.conv2d(input=input, weight=y)
seems to be incorrect, where y has the shape ([96, 3, 11, 11]), input has the shape ([4, 3, 227, 227]), and x ends up as ([4, 96, 217, 217]).
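Here is a minimal sketch that reproduces what I’m seeing (random tensors stand in for my actual weights and images):

import torch
import torch.nn.functional as F

# random stand-ins for the real weights and images
input = torch.randn(4, 3, 227, 227)   # [batch, in_channels, H, W]
y = torch.randn(96, 3, 11, 11)        # filter bank of the first conv layer

x = F.conv2d(input=input, weight=y)   # no stride/padding passed, so defaults apply
print(x.shape)                        # torch.Size([4, 96, 217, 217])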

Can anyone see my fault? Would highly appreciate any help!
Thank you.

I am facing the same issue when using the F.conv2d layer from the functional module. Did you resolve this issue?

The weight shape [96, 3, 11, 11] corresponds to [nb_filters, in_channels, h, w]; it is the shape of the filters, not of the output. Given a batch size of 4, the output will have the shape [4, nb_filters, H_out, W_out], so your result is correct.
Could you explain your use case a bit?
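If it helps, the output spatial size follows the usual convolution formula; assuming you used F.conv2d’s defaults of stride=1 and padding=0 (you didn’t post those arguments), a quick check looks like this:

# H_out = (H_in + 2 * padding - kernel_size) // stride + 1
H_in, kernel_size, padding, stride = 227, 11, 0, 1
H_out = (H_in + 2 * padding - kernel_size) // stride + 1
print(H_out)  # 217, so the full output shape is [4, 96, 217, 217]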

I was able to resolve the issue. I had missed the padding argument in the F.conv2d call, so the spatial size of the output features was shrinking.
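In case others run into this, here is a rough sketch of how stride and padding change the output size (the stride of 4 is the value from the original AlexNet, used here only as an example):

import torch
import torch.nn.functional as F

weight = torch.randn(96, 3, 11, 11)
inp = torch.randn(4, 3, 227, 227)

# With AlexNet's stride of 4 and no padding: (227 - 11) // 4 + 1 = 55
out = F.conv2d(input=inp, weight=weight, stride=4)
print(out.shape)  # torch.Size([4, 96, 55, 55])

# Keeping stride=1 but adding padding=2: (227 + 4 - 11) // 1 + 1 = 221
out_padded = F.conv2d(input=inp, weight=weight, stride=1, padding=2)
print(out_padded.shape)  # torch.Size([4, 96, 221, 221])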