Why isn't output shape of Conv2d equal to input shape?

I have a 96x96 input image. My first layer is nn.Conv2d(1, 16, 3, padding = 1).
The equation for the output shape in the docs is H_out = floor((H_in + 2*padding - dilation * (kernel_size - 1) - 1) / stride + 1).
So with stride 1, my output should be 96 + 2 - 1 * (3 - 1) - 1 + 1 = 96, the same dimension as the input.
But when I print(x.shape) after applying the layer, it’s torch.Size([1, 16, 48, 48]).
What’s going on? I thought a padding of 1 would keep the dimensions the same, but the output is 48 instead of 96, as if it were cut in half. Why is this?

Never mind, I forgot I was pooling with (2, 2) on the same line where I was convolving.
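For anyone who hits the same thing, here is a minimal sketch separating the two steps (assuming the pooling layer was nn.MaxPool2d, which is not shown in the original post):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 96, 96)  # (batch, channels, height, width)

conv = nn.Conv2d(1, 16, 3, padding=1)
y = conv(x)
print(y.shape)  # conv alone preserves spatial size: torch.Size([1, 16, 96, 96])

pool = nn.MaxPool2d(2, 2)
z = pool(y)
print(z.shape)  # pooling halves each spatial dim: torch.Size([1, 16, 48, 48])
```

With kernel_size=3 and padding=1, the convolution is "same" padding at stride 1; it is the (2, 2) max pool that divides 96 down to 48.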