Transposed Convolution

What is the meaning of the padding size in a transposed convolution, specifically for Conv2dTranspose?

Also, what if we had the chance to pad with 1s instead of 0s? Would that change the output of Conv2dTranspose? I cannot really understand the meaning of padding in the context of a transposed convolution.
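For concreteness, here is a minimal sketch of what I am asking about (I am assuming Conv2dTranspose refers to something like PyTorch's `nn.ConvTranspose2d`; the layer names are mine). Note that in a transposed convolution, `padding` does not appear to pad the input with values at all; it seems to shrink the output, following the documented size formula `out = (in - 1) * stride - 2 * padding + kernel_size + output_padding`:

```python
import torch
import torch.nn as nn

# A 1x1-channel 4x4 input.
x = torch.randn(1, 1, 4, 4)

# Same kernel and stride, differing only in `padding`.
no_pad = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, padding=0)
pad1 = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, padding=1)

# padding=0: out = (4 - 1) * 2 - 0 + 3 = 9
print(no_pad(x).shape)  # torch.Size([1, 1, 9, 9])

# padding=1: out = (4 - 1) * 2 - 2 + 3 = 7
print(pad1(x).shape)    # torch.Size([1, 1, 7, 7])
```

So `padding=1` produces a *smaller* output than `padding=0`, which is the opposite of what padding does in an ordinary convolution, and that is the part I find confusing.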