RuntimeError: expected stride to be a single integer value or a list of 4 values to match the convolution dimensions, but got stride=[2, 2, 2]

I am getting this error after implementing a hypernetwork to learn the weights of a CNN. This is, for example, how I define the weights:
'conv0_0_weight': torch.Size([c//2, in_planes, 4, 4, 4])

This is how it is used:
x = functional_conv3d(x, weights['conv0_0_weight'], weights['conv0_0_bias'], self.c//2, stride=2, padding=1)

and this is the functional conv3d:

import torch.nn as nn
import torch.nn.functional as F

def functional_conv3d(x, weight, bias, out_planes, stride=1, padding=1, dilation=1):
    x = F.conv3d(x, weight, bias, stride=stride, padding=padding, dilation=dilation)
    # Note: this PReLU is re-created with fresh (untrained) parameters on every call.
    x = nn.PReLU(num_parameters=out_planes)(x)
    return x

I printed the shapes of the input tensor and weights:

# input x: torch.Size([2, 3, 128, 128, 128])
# conv0_0_weight: torch.Size([2, 128, 3, 4, 4, 4])

The first value (2) is the batch size. I checked the documentation, and c//2, in_planes, 4, 4, 4 in the weight shape initialization correspond to out_channels, in_channels, kT, kH, kW. Is there something I am missing?

The in_channels dimension (128 in your example) does not match the channel dimension of x (3 in your example).
Also, your conv weight uses an additional dimension of size 3, so either remove it or drop the first dimension in case it's supposed to represent the batch dimension, since parameters do not use a batch dim.
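
For illustration, here is a minimal sketch of the second suggestion, assuming the leading dimension of size 2 in the printed weight shape is the batch dimension added by the hypernetwork:

import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 128, 128, 128)  # (N, C_in, D, H, W), as printed above
w = torch.randn(2, 128, 3, 4, 4, 4)   # 6D weight: extra leading batch dim

# F.conv3d expects a 5D weight of shape (out_channels, in_channels, kT, kH, kW);
# a 6D weight makes the backend infer a 4D convolution and demand a 4-value
# stride, which is exactly the error above. Dropping the batch dim, e.g. by
# selecting one sample's weights, restores the expected shape:
w5 = w[0]                              # torch.Size([128, 3, 4, 4, 4])
out = F.conv3d(x, w5, stride=2, padding=1)
print(out.shape)                       # torch.Size([2, 128, 64, 64, 64])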


Thanks for your reply. The problem was that the batch dimension was adding an extra dimension to the generated weights, since I was using a functional model, which made the shapes inconsistent with what the conv layers expect. I added an iteration over each element in the batch in forward(), and after that the issue was resolved.
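
For reference, a minimal sketch of that per-sample workaround, assuming the hypernetwork emits one weight and bias set per batch element (the names conv3d_per_sample, batched_weight, and batched_bias are hypothetical):

import torch
import torch.nn.functional as F

def conv3d_per_sample(x, batched_weight, batched_bias, stride=2, padding=1):
    # batched_weight: (N, out_channels, in_channels, kT, kH, kW)
    # batched_bias:   (N, out_channels)
    # F.conv3d has no batch dimension for weights, so apply it per sample.
    outs = []
    for i in range(x.size(0)):
        xi = x[i:i + 1]  # keep a batch dim of 1: (1, C_in, D, H, W)
        outs.append(F.conv3d(xi, batched_weight[i], batched_bias[i],
                             stride=stride, padding=padding))
    return torch.cat(outs, dim=0)

Depending on the PyTorch version, torch.vmap may also be able to vectorize such a loop instead of iterating in Python.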