What is the difference between ConvTranspose2d and Upsample in PyTorch?
To implement UNet in PyTorch based on the model in this paper, some people used nn.ConvTranspose2d for the first upsampling layer, while others used nn.Upsample.
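For example, something like this (a minimal sketch; the channel sizes are just for illustration, loosely following the paper's 1024 -> 512 first up step):

```python
import torch.nn as nn

# Variant 1: learned upsampling with a transposed convolution
up = nn.ConvTranspose2d(in_channels=1024, out_channels=512,
                        kernel_size=2, stride=2)

# Variant 2: fixed bilinear interpolation, plus a 1x1 conv to halve the channels
up = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True),
    nn.Conv2d(1024, 512, kernel_size=1),
)
```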
ConvTranspose2d has learnable parameters, while Upsample has none. Upsampling makes inference and training faster because it has no weights to update or compute gradients for.
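A quick sketch to illustrate the difference (the shapes and channel counts here are arbitrary examples):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 16, 16)

deconv = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
up = nn.Upsample(scale_factor=2, mode='nearest')

print(deconv(x).shape)  # torch.Size([1, 32, 32, 32])
print(up(x).shape)      # torch.Size([1, 64, 32, 32]) -- channels unchanged

# ConvTranspose2d has trainable weights; Upsample has none
print(sum(p.numel() for p in deconv.parameters()))  # 8224 (64*32*2*2 weights + 32 biases)
print(sum(p.numel() for p in up.parameters()))      # 0
```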
So, according to you, what did the authors use for upsampling in the UNet paper: nn.Upsample or ConvTranspose2d?
And can you also explain which of the two to use, and under what conditions?
In the UNet paper, they’ve stated that “up-convolutions” were used. I would assume this means they implemented/used ConvTranspose2d.
As for which one to use, it really depends on the network you’re designing. If you are sure of the kind of upsampling that needs to be done (bilinear, etc.), then you can use nn.Upsample. However, if you think it would be important to “learn” how to upsample instead of just using a hardcoded method, then the trainable parameters in ConvTranspose2d would be useful.
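For instance, a minimal decoder block that lets you switch between the two approaches might look like this (a hypothetical sketch, not the paper’s exact architecture):

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """One decoder upsampling step; `learned` picks the method."""
    def __init__(self, in_ch, out_ch, learned=True):
        super().__init__()
        if learned:
            # Trainable: the network learns its own upsampling kernel
            self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        else:
            # Fixed bilinear interpolation; a 1x1 conv adjusts the channel count
            self.up = nn.Sequential(
                nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True),
                nn.Conv2d(in_ch, out_ch, kernel_size=1),
            )

    def forward(self, x):
        return self.up(x)

x = torch.randn(1, 1024, 28, 28)
print(UpBlock(1024, 512, learned=True)(x).shape)   # torch.Size([1, 512, 56, 56])
print(UpBlock(1024, 512, learned=False)(x).shape)  # torch.Size([1, 512, 56, 56])
```

Both variants double the spatial resolution and halve the channels; the difference is only whether the upsampling itself is learned.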