Using this: nn.ConvTranspose3d(64, 3, kernel_size=4, stride=2, padding=(1, 1, 1))
I get the correct number of output channels and frames, but the frame size is 64x64: torch.Size([1, 3, 16, 64, 64])
What should I change to get torch.Size([1, 3, 16, 112, 112])?
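For context, here is a sketch of what I believe is happening. The input shape (1, 64, 8, 32, 32) is my inference from the reported output, not stated above. PyTorch computes each transposed-conv spatial size (with dilation=1, output_padding=0) as out = (in - 1) * stride - 2 * padding + kernel_size, so 32 maps to 64 and cannot reach 112 with these parameters; one possible workaround shown below is resizing afterwards with F.interpolate:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed input shape, inferred from the reported output size.
x = torch.randn(1, 64, 8, 32, 32)

deconv = nn.ConvTranspose3d(64, 3, kernel_size=4, stride=2, padding=(1, 1, 1))

# Output size per dimension: (in - 1) * stride - 2 * padding + kernel_size
# Depth:   (8  - 1) * 2 - 2 + 4 = 16
# Spatial: (32 - 1) * 2 - 2 + 4 = 64
out = deconv(x)
print(out.shape)  # torch.Size([1, 3, 16, 64, 64])

# One option (an assumption, not the only fix): keep the transposed conv
# and resize the spatial dimensions to 112x112 afterwards.
y = F.interpolate(out, size=(16, 112, 112), mode='trilinear', align_corners=False)
print(y.shape)  # torch.Size([1, 3, 16, 112, 112])
```

Alternatively, the kernel/stride/padding could be chosen so that (in - 1) * stride - 2 * padding + kernel_size equals 112 directly, which depends on the true input size.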