RuntimeError: Calculated padded input size per channel: (2 x 130 x 130). Kernel size: (3 x 3 x 3). Kernel size can't be greater than actual input size


I am using a UNet to reconstruct an image from 6 features. The input tensor has shape (batch size, channels=6, x, y, z), where z is 1, 2, or 3 and x, y < 64.

You can find a summary of my model attached.

While trying to train, I receive the padding error above. Any help?
Thanks in advance.

The input activation to a conv layer is too small: it is [depth=2, height=130, width=130] while the kernel size is [3, 3, 3], and a kernel dimension can't be larger than the (padded) input dimension it slides over. Make sure the depth, i.e. x in your case, is large enough for the model, or shrink the kernel along that dimension.
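A minimal sketch (with assumed channel counts and spatial sizes, not your actual model) that reproduces the size check and shows two possible fixes: either don't convolve across the small dimension by using a kernel size of 1 there, or pad that dimension so the padded size is at least the kernel size.

```python
import torch
import torch.nn as nn

# Hypothetical input: depth=2 is smaller than the kernel size of 3
x = torch.randn(1, 6, 2, 32, 32)  # (N, C, depth, H, W)

# Reproduces the error: padded depth (2) < kernel depth (3)
conv_bad = nn.Conv3d(6, 16, kernel_size=3)
try:
    conv_bad(x)
except RuntimeError as e:
    print(e)  # "... Kernel size can't be greater than actual input size"

# Fix A: use a kernel of size 1 along the small (depth) dimension
conv_flat = nn.Conv3d(6, 16, kernel_size=(1, 3, 3), padding=(0, 1, 1))
print(conv_flat(x).shape)  # torch.Size([1, 16, 2, 32, 32])

# Fix B: pad the depth dimension so the padded size (2 + 2*1 = 4) >= 3
conv_pad = nn.Conv3d(6, 16, kernel_size=3, padding=1)
print(conv_pad(x).shape)  # torch.Size([1, 16, 2, 32, 32])
```

Note that pooling layers inside a UNet encoder shrink the spatial dimensions further, so even an input that passes the first conv can become too small deeper in the network; the depth dimension has to survive every downsampling step.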