How to get torch.Size([1, 3, 16, 112, 112]) from torch.Size([1, 64, 8, 32, 32]) using ConvTranspose3d?

Using this: nn.ConvTranspose3d(64, 3, kernel_size=4, stride=2, padding=(1, 1, 1)) I get the correct output channels and number of frames, but the spatial size is 64x64: torch.Size([1, 3, 16, 64, 64])

What should I change in order to get torch.Size([1, 3, 16, 112, 112]) ?

With the default dilation and output_padding, each dimension of ConvTranspose3d's output is (in - 1) * stride - 2 * padding + kernel_size. The depth dimension (8 -> 16) already works with stride 2 and padding 1, but the spatial dimensions need 32 -> 112, which with kernel_size=4 is achieved by stride 4 and padding 8: (32 - 1) * 4 - 2 * 8 + 4 = 112.

The solution is:

ConvTranspose3d(64, 3, kernel_size=4, stride=(2, 4, 4), bias=False, padding=(1, 8, 8))
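A quick sketch to verify the shapes (the random input tensor is just a stand-in for your real activations):

```python
import torch
import torch.nn as nn

# Stand-in for the [batch, channels, frames, height, width] activation
x = torch.randn(1, 64, 8, 32, 32)

up = nn.ConvTranspose3d(
    64, 3,
    kernel_size=4,
    stride=(2, 4, 4),     # depth: 8 -> 16, spatial: 32 -> 112
    padding=(1, 8, 8),    # (in - 1) * stride - 2 * padding + kernel_size
    bias=False,
)

print(up(x).shape)  # torch.Size([1, 3, 16, 112, 112])
```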