ConvTranspose2d vs. bilinear Upsample


I was wondering if someone could tell me the differences between

ConvTranspose2d(..., groups=in_channels) and Upsample(mode='bilinear')


Upsample will use the specified mode to upsample the activation "mathematically" (it has no trainable parameters), while ConvTranspose2d uses trainable filter kernels.
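A minimal sketch of that difference (the input size and the kernel size, stride, and padding below are just example values chosen so both layers double the spatial resolution):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)  # example input: batch 1, 3 channels, 8x8

# Fixed bilinear interpolation: no parameters to learn
up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

# Depthwise (groups=in_channels) transposed convolution: trainable kernels
deconv = nn.ConvTranspose2d(3, 3, kernel_size=4, stride=2, padding=1, groups=3)

print(up(x).shape)      # torch.Size([1, 3, 16, 16])
print(deconv(x).shape)  # torch.Size([1, 3, 16, 16])

# Parameter counts: 0 vs. (3 groups * 1 * 4 * 4 weights) + 3 biases = 51
print(sum(p.numel() for p in up.parameters()))      # 0
print(sum(p.numel() for p in deconv.parameters()))  # 51
```

Both produce the same output shape here, but only the transposed convolution learns its upsampling filter during training.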

That makes sense. Is it possible to convert a ConvTranspose2d to an Upsample?

You could replace the layer directly; just make sure the replacement keeps the same output shape (i.e. pick a scale_factor that matches the kernel size, stride, and padding of the transposed convolution).
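Going the other direction is also instructive: a depthwise ConvTranspose2d with kernel_size=4, stride=2, padding=1 can be initialized with a fixed bilinear kernel so that it reproduces bilinear upsampling (away from the image border, where interpolate replicates edge pixels while the transposed convolution zero-pads). This is a sketch; the helper function below is my own, not a PyTorch API:

```python
import torch
import torch.nn as nn

def bilinear_kernel(kernel_size):
    # 1-D triangular filter; its outer product is a 2-D bilinear kernel
    factor = (kernel_size + 1) // 2
    center = factor - 1 if kernel_size % 2 == 1 else factor - 0.5
    og = torch.arange(kernel_size).float()
    filt = 1 - torch.abs(og - center) / factor
    return filt[:, None] * filt[None, :]

channels = 3
deconv = nn.ConvTranspose2d(channels, channels, kernel_size=4, stride=2,
                            padding=1, groups=channels, bias=False)
with torch.no_grad():
    # Each group gets the same fixed bilinear kernel
    deconv.weight.copy_(bilinear_kernel(4).expand(channels, 1, 4, 4))

x = torch.randn(1, channels, 8, 8)
up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

# Interior pixels match; only the 2-pixel border differs
print(torch.allclose(deconv(x)[..., 2:-2, 2:-2],
                     up(x)[..., 2:-2, 2:-2], atol=1e-5))  # True
```

This bilinear initialization is the classic trick from FCN-style segmentation networks: the layer starts out as plain bilinear upsampling but can then be fine-tuned.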