Inverse Layer in PyTorch

I am trying to train an auto-encoder/decoder type of network where I have a few conv layers, then a flatten/reshape to a single vector, and from that I want to reconstruct the image back.

I had used Lasagne previously, which has a layer called the InverseLayer (http://lasagne.readthedocs.io/en/latest/modules/layers/special.html#lasagne.layers.InverseLayer) that is useful for building the decoder network.

I was wondering if there is something similar to the InverseLayer in PyTorch?

Hi,

there isn’t one in particular, but the layers the Lasagne docs name all have counterparts (a rough sketch follows the list):

  • Linear is Linear again with input/output dims swapped (it’s transposed),
  • Convolutions have transposed layers, e.g. ConvTranspose2d,
  • for pooling layers, there are unpooling layers, e.g. MaxUnpool2d.
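
For illustration, here is a rough sketch of how those pieces might fit together. The layer sizes are made up for a 28×28 single-channel input and are not meant as the one right way to set this up:

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # encoder
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2, return_indices=True)   # keep indices for unpooling
        self.fc_enc = nn.Linear(8 * 14 * 14, 32)
        # decoder: the same layers with input/output roles swapped
        self.fc_dec = nn.Linear(32, 8 * 14 * 14)
        self.unpool = nn.MaxUnpool2d(2)
        self.deconv = nn.ConvTranspose2d(8, 1, kernel_size=3, padding=1)

    def forward(self, x):                        # x: (N, 1, 28, 28)
        h = torch.relu(self.conv(x))             # (N, 8, 28, 28)
        h, idx = self.pool(h)                    # (N, 8, 14, 14)
        z = self.fc_enc(h.flatten(1))            # (N, 32)
        h = self.fc_dec(z).view(-1, 8, 14, 14)   # back to the feature-map shape
        h = self.unpool(h, idx)                  # (N, 8, 28, 28)
        return torch.sigmoid(self.deconv(h))     # (N, 1, 28, 28)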

Have good fun with your project!

Best regards

Thomas


But does this handle the non-linearity? Say I have my conv layers as follows:
l1 = F.sigmoid(self.conv1(x))
l2 = F.sigmoid(self.conv2(l1))
...
fc = ... # a set of fc layers using nn.Linear, then reconstruct them back by interchanging the input and output dimensions.

reconstruct_2 = F.sigmoid(self.deconv2(fc))
reconstruct_1 = F.sigmoid(self.deconv1(reconstruct_2))

Is the reconstruct_2 / reconstruct_1 part correct?

Hello,

it does not handle the nonlinearity.
From the description of Lasagne’s InverseLayer, it uses the derivative, so it effectively provides the backpropagation step of the layer it is based on. You would need to do this yourself (using d/dx sigmoid(x) = sigmoid(x)*(1-sigmoid(x)), so something like reconstruct_2 = self.deconv2(fc*(1-fc))), or use the torch.autograd.grad function.
My understanding is that for the backward, you would want the nonlinearity-induced term before the convolution.
This is, however, not necessarily something that reconstructs anything.
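
If you really want the InverseLayer behaviour (i.e. the backpropagation step), torch.autograd.grad can compute it for you. A rough sketch, with a made-up conv and a random tensor standing in for whatever your decoder produces:

import torch

conv = torch.nn.Conv2d(1, 8, kernel_size=3, padding=1)

x = torch.randn(4, 1, 28, 28, requires_grad=True)
l1 = torch.sigmoid(conv(x))                      # encoder step

incoming = torch.randn_like(l1)                  # stand-in for the decoder's feature map

# "InverseLayer" of the sigmoid + conv step: backpropagate `incoming`
# through it, which is what Lasagne's InverseLayer computes.
reconstruct = torch.autograd.grad(l1, x, grad_outputs=incoming,
                                  create_graph=True)[0]   # same shape as x

Passing create_graph=True keeps the result differentiable, so you can still train through it; but as said above, whether that actually reconstructs anything is another question.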

Best regards

Thomas