Convolutional output shape with padding

import torch

img = torch.randn(1, 3, 32, 32)  # (batch, channels, height, width)
conv = torch.nn.Conv2d(in_channels=3, out_channels=6, kernel_size=4, stride=2, padding=1)
conv(img).size()  # torch.Size([1, 6, 16, 16])

Why does this code not raise an error? If the height and width are 32, with kernel_size=4 and stride=2, the second-to-last window of pixels in the first row will be 29-30-31-32 and the last will be 31-32-33, with one pixel missing to complete the kernel_size.
Are extra pixels automatically discarded, or does something else happen?
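
On the discarding part of the question: the output-size formula floor((size + 2*padding - kernel_size) / stride) + 1 is floored, so leftover pixels that cannot complete a window are silently dropped rather than raising an error. A minimal sketch illustrating this (the 33x33 input here is my own example, chosen so a window genuinely cannot fit):

import torch

# With no padding and a 33-wide input, windows cover 1-4, 3-6, ..., 27-30, 29-32;
# the last column (33) cannot complete a 4-wide window and is silently discarded:
# floor((33 + 2*0 - 4) / 2) + 1 = 15
conv = torch.nn.Conv2d(in_channels=3, out_channels=6, kernel_size=4, stride=2, padding=0)
print(conv(torch.randn(1, 3, 33, 33)).size())  # torch.Size([1, 6, 15, 15])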

I thought padding was added only to the right and bottom edges, while instead it is added on all four sides… now it all makes sense.
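
A quick check of that: with padding=1 on all four sides, the padded width is 32 + 2 = 34, so the windows 1-4, 3-6, ..., 31-34 all fit exactly and the formula gives floor((32 + 2*1 - 4) / 2) + 1 = 16, matching conv(img).size() above. A small sketch making the symmetric padding explicit with torch.nn.functional.pad:

import torch
import torch.nn.functional as F

img = torch.randn(1, 3, 32, 32)
# Pad 1 pixel on the left, right, top, and bottom: 32 -> 34 in each spatial dimension.
padded = F.pad(img, (1, 1, 1, 1))
print(padded.size())  # torch.Size([1, 3, 34, 34])

# On the 34-wide padded row, every 4-wide window at stride 2 is complete:
print((34 - 4) // 2 + 1)  # 16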