How to transpose a network

I am using the code of a variational auto-encoder from here. This is the relevant code:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class VAE(nn.Module):
    def __init__(self):
        super(VAE, self).__init__()

        self.fc1 = nn.Linear(784, 400)
        self.fc21 = nn.Linear(400, 20)
        self.fc22 = nn.Linear(400, 20)
        self.fc3 = nn.Linear(20, 400)
        self.fc4 = nn.Linear(400, 784)

    def encode(self, x):
        h1 = F.relu(self.fc1(x))
        return self.fc21(h1), self.fc22(h1)

    def reparametrize(self, mu, logvar):
        std = logvar.mul(0.5).exp_()
        if torch.cuda.is_available():
            eps = torch.cuda.FloatTensor(std.size()).normal_()
        else:
            eps = torch.FloatTensor(std.size()).normal_()
        eps = Variable(eps)
        return eps.mul(std).add_(mu)

    def decode(self, z):
        h3 = F.relu(self.fc3(z))
        return F.sigmoid(self.fc4(h3))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparametrize(mu, logvar)
        return self.decode(z), mu, logvar, z

Suppose that, at test time, I would like to transpose the decode function, that is, run that part of the network in reverse: feed it an input of size (400, 784) and get an output of size (20, 400). Is there a way to transpose a network without manually copying the weights?

Thanks!

Hi,

Unfortunately there is no way to do this automatically, mainly because for most operations, an inverse is not easy to compute or even well defined.
You will need to write this by hand.


Thanks @albanD! When you say “by hand”, do you mean manually defining a new network and copying the weights? (Just trying to make sure I didn’t miss something.)

Well, you don’t really need to create a new net. In your current net you could add a decode_transpose method that does what you want.
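
For illustration, here is one way such a `decode_transpose` method might look. This is a sketch, not code from the original example: it assumes you want to apply the transposed weight matrices of `fc4` and `fc3` in reverse order, and the choice of activations and whether to use any biases are modeling assumptions. It relies on `F.linear(x, w)` computing `x @ w.t()`, so passing `w.t()` applies the transpose of the stored weight.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self):
        super(VAE, self).__init__()
        self.fc3 = nn.Linear(20, 400)
        self.fc4 = nn.Linear(400, 784)

    def decode(self, z):
        h3 = F.relu(self.fc3(z))
        return torch.sigmoid(self.fc4(h3))

    def decode_transpose(self, x):
        # fc4.weight has shape (784, 400); F.linear(x, fc4.weight.t())
        # multiplies x by the transpose of that matrix, mapping 784 -> 400.
        h = F.relu(F.linear(x, self.fc4.weight.t()))
        # fc3.weight has shape (400, 20); its transpose maps 400 -> 20.
        return F.linear(h, self.fc3.weight.t())
```

Because the method reads `fc3.weight` and `fc4.weight` directly, the weights are shared rather than copied: any update to the decoder is immediately reflected in `decode_transpose`.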