Does padding affect the sign of convolution filter output?

Hey guys, I've observed something that I want to discuss and clarify. Let's say you have a tensor h**2, so h**2 >= 0, and a kernel w whose values are all non-negative, i.e. w >= 0. When I do the convolution F.conv2d(h**2, w), the result is all non-negative.

However, when I do the same operation with padding=1, i.e. F.conv2d(h**2, w, padding=1), I get some negative values as well, which I checked with the following assertion (it fails):

assert torch.all(F.conv2d(h**2, w, padding=1) >= 0)
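For reference, here is a minimal sketch of the setup I'm describing. The shapes of h and w are my own arbitrary choices for illustration (the actual tensors in my code are different):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Arbitrary example shapes: batch 1, 3 channels, 8x8 spatial.
h = torch.randn(1, 3, 8, 8)
# Kernel with all non-negative values (torch.rand samples from [0, 1)).
w = torch.rand(4, 3, 3, 3)

out = F.conv2d(h**2, w)              # no padding -> 6x6 output
out_pad = F.conv2d(h**2, w, padding=1)  # zero padding of 1 -> 8x8 output

print(out.min().item(), out_pad.min().item())
```
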

Why does padding affect the sign of the output?