Difference between nn.Upsample and nn.ConvTranspose2d

Since nn.Upsample can deal with 2D tensors, what's the difference between Upsample and ConvTranspose2d? Thank you


While nn.Upsample uses an interpolation technique, nn.ConvTranspose2d uses trainable filters to create its output (similar to vanilla conv layers).
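A minimal sketch of the contrast: both modules can double the spatial size of a feature map, but only the transposed convolution has trainable weights. The channel count and kernel settings below are illustrative, not from the thread; `kernel_size=4, stride=2, padding=1` is one common configuration that exactly doubles height and width.

```python
import torch
import torch.nn as nn

# Fixed interpolation: no learnable parameters.
up = nn.Upsample(scale_factor=2, mode='nearest')

# Learned upsampling: kernel_size=4, stride=2, padding=1 also doubles
# the spatial size, but the filter weights are trained by backprop.
deconv = nn.ConvTranspose2d(in_channels=3, out_channels=3,
                            kernel_size=4, stride=2, padding=1)

x = torch.randn(1, 3, 8, 8)
print(up(x).shape)      # torch.Size([1, 3, 16, 16])
print(deconv(x).shape)  # torch.Size([1, 3, 16, 16])
```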

Thank you for the information. Can you explain the pros and cons of these two functions?
Does Upsample recover more context information, or does ConvTranspose2d?

Generally speaking, the ConvTranspose2d layer might learn some features as it's using trainable parameters, while Upsample just interpolates.
The former approach would thus have more parameters (more capacity) and might therefore overfit easier.
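The parameter difference described above can be checked directly; this is a hedged sketch with illustrative channel and kernel choices, not values from the thread. `nn.Upsample` contributes nothing to the parameter count, while `nn.ConvTranspose2d` adds a full weight tensor plus bias:

```python
import torch.nn as nn

up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
deconv = nn.ConvTranspose2d(3, 3, kernel_size=4, stride=2, padding=1)

# Upsample is parameter-free; ConvTranspose2d carries a
# (in_channels, out_channels, kH, kW) weight plus a bias per output channel.
n_up = sum(p.numel() for p in up.parameters())
n_deconv = sum(p.numel() for p in deconv.parameters())
print(n_up)      # 0
print(n_deconv)  # 3*3*4*4 + 3 = 147
```

Those extra 147 parameters are the added capacity (and the added overfitting risk) the answer refers to.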

I can’t really tell which approach works better in which situation, as I’ve seen both methods used for certain use cases.
While it seems that ConvTranspose2d layers are preferred in GANs, I've seen some models using Upsample perform better for segmentation tasks. This is just my biased observation, so it's not a recommendation to choose one over the other.
You should try both approaches and see how your model performs.


Thank you, that helps a lot.