3D Convolution Data mis-shaped

Hi guys.

I’m trying to train a 3D conv on a [32, 32, 32] volume with 3 channels.

My batch (size 1 for now) is channels-last, with shape:

torch.Size([1, 32, 32, 32, 3])

I get this error when I call the layer's forward:

RuntimeError: Given groups=1, weight of size [16, 3, 5, 5, 5], expected input[1, 32, 32, 32, 3] to have 3 channels, but got 32 channels instead

My layer is defined by:


nn.Conv3d(in_channels=3,out_channels=16,kernel_size=5, padding=2)

Where am I going wrong?

Oh, it looks like my input should be [1, 3, 32, 32, 32] — channels first, right after the batch dimension.
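Right — `nn.Conv3d` expects channels-first input of shape (N, C, D, H, W), so the channel axis has to move from last position to position 1. A minimal sketch of the fix using `permute` (tensor names are illustrative):

```python
import torch
import torch.nn as nn

# Layer as defined in the question
layer = nn.Conv3d(in_channels=3, out_channels=16, kernel_size=5, padding=2)

# Channels-last batch, as in the question: (N, D, H, W, C)
x = torch.randn(1, 32, 32, 32, 3)

# Conv3d wants (N, C, D, H, W): move the channel axis from dim 4 to dim 1.
# .contiguous() makes the permuted view contiguous in memory.
x = x.permute(0, 4, 1, 2, 3).contiguous()

out = layer(x)
print(out.shape)  # torch.Size([1, 16, 32, 32, 32])
```

With kernel_size=5 and padding=2 the spatial dimensions are preserved, so only the channel count changes (3 → 16). If you control the data-loading code, it's usually cleaner to produce channels-first tensors there instead of permuting per batch.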