Padding semantics of convolution_backward

Hi! I have a question about backward conv padding. Basically, I’m following this to construct the call below:

>>> import torch
>>> grad_out = torch.rand([8,256,5,7,5])
>>> image = torch.rand([8,128,10,14,10], requires_grad=True)
>>> kernel = torch.rand([256,128,3,3,3], requires_grad=True)
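>>> # args: grad_output, input, weight, bias_sizes, stride, padding, dilation, transposed, output_padding, groups, output_mask; [0] of the result is grad_input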
>>> print(torch.ops.aten.convolution_backward(grad_out, image, kernel, None, (2, 2, 2), (1, 1, 1), (1, 1, 1), False, [0], 1, (True, False, False))[0].shape)
torch.Size([8, 128, 10, 14, 10])
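
For context, I believe this matches what the normal autograd path gives for grad_input with the same hyper-parameters (a quick sketch with the tensors above, so the exact call may be slightly off):

>>> out = torch.nn.functional.conv3d(image, kernel, stride=2, padding=1, dilation=1)
>>> out.shape
torch.Size([8, 256, 5, 7, 5])
>>> out.backward(grad_out)
>>> image.grad.shape
torch.Size([8, 128, 10, 14, 10])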

Seeing that convolution_backward calls check_shape_forward, I assumed it follows forward conv padding semantics, but I cannot reconcile the spatial dimensions of this op’s output with either the Conv3d or the ConvTranspose3d output-size formula:

>>> (10+2*1-1*(3-1)-1)//2 + 1
5
>>> (10-1)*2-2*1+1*(3-1)+1
19
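
In other words (assuming I’m applying the formulas correctly), the first line is the Conv3d output size and the second is the ConvTranspose3d output size, and neither comes out to 10:

>>> conv = torch.nn.Conv3d(128, 256, kernel_size=3, stride=2, padding=1, dilation=1)
>>> conv(image).shape   # Conv3d formula -> matches grad_out's spatial size, not grad_input's
torch.Size([8, 256, 5, 7, 5])
>>> deconv = torch.nn.ConvTranspose3d(128, 256, kernel_size=3, stride=2, padding=1, dilation=1)
>>> deconv(image).shape  # ConvTranspose3d formula on the input's spatial size -> 19, not 10
torch.Size([8, 256, 19, 27, 19])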

The fact that output_padding is seemingly never set doesn’t really help either :sweat_smile: Any pointers to what’s going on or what I’m missing?