Unpooling or Transposed convolution for Deconvolution network

Hi guys,

I am wondering about the influence of the unpooling operator versus transposed convolution
for upsampling feature maps in deconvolution networks for output image prediction tasks.

What are the pros and cons of the two operators?
It would be great if you could point me to some papers on this topic.

Thanks in advance!!

Hi,

The main difference is that a transposed convolution is similar to an ordinary convolution: it has learnable weights that adapt to your task, but of course you also need to train them (which costs time and compute). The unpooling operator, on the other hand, is a fixed mathematical operation and does not learn anything.
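Here is a minimal PyTorch sketch of that contrast (the shapes and 2x factor are just for illustration, not from any particular paper):

```python
import torch
import torch.nn as nn

# Example feature map: (batch, channels, H, W)
x = torch.randn(1, 16, 16, 16)

# Path 1: max-pool then max-unpool. Unpooling is a fixed operation with no
# parameters; it needs the indices recorded by the matching MaxPool2d.
pool = nn.MaxPool2d(2, return_indices=True)
unpool = nn.MaxUnpool2d(2)
pooled, indices = pool(x)            # 16x16 -> 8x8
up_unpool = unpool(pooled, indices)  # 8x8 -> 16x16
print(up_unpool.shape, sum(p.numel() for p in unpool.parameters()))  # 0 parameters

# Path 2: transposed convolution. It upsamples with learnable weights,
# so it is trained together with the rest of the network.
tconv = nn.ConvTranspose2d(16, 16, kernel_size=2, stride=2)
up_tconv = tconv(pooled)             # 8x8 -> 16x16
print(up_tconv.shape, sum(p.numel() for p in tconv.parameters()))    # >0 parameters
```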

Sometimes it depends on your task. For instance, you can do something similar to pooling by using strided convolutions, yet you can still see many SOTA papers that use pooling layers. So it is hard to say one is better than the other.
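As a rough sketch of that stride-vs-pooling point (again just an assumed 2x downsampling example):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 32, 32)

# Downsampling with a pooling layer: fixed, parameter-free.
pool = nn.MaxPool2d(kernel_size=2, stride=2)

# Downsampling with a strided convolution: learnable, folds the
# downsampling into the convolution itself.
strided = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)

print(pool(x).shape)     # torch.Size([1, 16, 16, 16])
print(strided(x).shape)  # torch.Size([1, 16, 16, 16])
```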

If you google “unpooling vs deconvolution” you will find many discussions about it.

Here are some:

  1. https://arxiv.org/pdf/1311.2901v3.pdf
  2. https://stats.stackexchange.com/questions/252810/in-cnn-are-upsampling-and-transpose-convolution-the-same
  3. https://github.com/facebookresearch/SparseConvNet/issues/75

Best