Padding order is flipped W <-> H - Is it?

Either I do not understand the rationale for the pad input order or there’s a flip.
As I understand it, the input to pad would be (height_0, height_k, width_0, width_k), with H and W ordered just like in a torch.Tensor of shape NCHW. But that's not what I see.
version: ‘0.4.1.post2’

import torch
import torch.nn as nn

zero = torch.zeros(1, 1, 4, 3)
cpad = nn.ConstantPad2d([0, 2, 0, 0], -1)
padded = cpad(zero)
print(padded.size())

I would expect my result to be padded by 2 in the height dimension, i.e. size = (1, 1, 6, 3), but I get
torch.Size([1, 1, 4, 5])

Perhaps a clearer example:

zero = torch.zeros(1, 1, 4, 3)
cpad = nn.ConstantPad2d([2, 2, 0, 0], -1)
padded = cpad(zero)
print(padded.size())

Returns
torch.Size([1, 1, 4, 7])

thank you

The input to the padding layer is defined as [left, right, top, bottom]:

padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom)

In your case you are setting the padding on the right (left and right in the second example), so your width is increased.
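For example, to get the expected (1, 1, 6, 3) on the same input, the height padding goes into the last two slots of the 4-tuple (this is just a minimal check, padding the bottom by 2):

import torch
import torch.nn as nn

zero = torch.zeros(1, 1, 4, 3)
cpad = nn.ConstantPad2d([0, 0, 0, 2], -1)  # (left, right, top, bottom): pad bottom of H by 2
padded = cpad(zero)
print(padded.size())  # torch.Size([1, 1, 6, 3])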

Yes, I see that - IMO if tensors are defined as NCHW, padding ought to be defined as [top, bottom, left, right]. Unless there's a reason in CUDA why this is useful, I don't see how it makes sense.

A simple use case: I subtract one tensor's shape from another's, then use the differences to pad the smaller tensor; to use that information in the padding I have to flip the order (see the sketch below). As said, I don't understand the rationale. OK, so no bug, but odd design.
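A minimal sketch of that use case (the tensor names and sizes are just for illustration): the shape differences come out in (H, W) order, but F.pad / ConstantPad2d wants the W pair first, so the pairs have to be flipped before padding.

import torch
import torch.nn.functional as F

big = torch.zeros(1, 1, 6, 5)
small = torch.zeros(1, 1, 4, 3)

# differences in (H, W) order, the same order the shapes are stored in
dh = big.size(2) - small.size(2)  # 2
dw = big.size(3) - small.size(3)  # 2

# F.pad expects (left, right, top, bottom), i.e. W first, then H,
# so the H/W pairs must be swapped relative to the shape order
pad = [0, dw, 0, dh]
padded = F.pad(small, pad, value=-1)
print(padded.size())  # torch.Size([1, 1, 6, 5])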