Autoencoders in PyTorch

Would PyTorch support something like this? How does one go about implementing a simple autoencoder?

import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class Encoder(nn.Module):
    def __init__(self):
        super(Encoder, self).__init__()
        self.fc1 = nn.Linear(784, 32)   # 784-dim input (e.g. a flattened 28x28 image) -> 32-dim code

    def forward(self, x):
        return F.sigmoid(self.fc1(x))

class Decoder(nn.Module):
    def __init__(self):
        super(Decoder, self).__init__()
        self.fc1 = nn.Linear(32, 784)   # 32-dim code -> 784-dim reconstruction

    def forward(self, x):
        return F.sigmoid(self.fc1(x))

class AutoEncoder(nn.Module):
    def __init__(self):
        super(AutoEncoder, self).__init__()
        self.fc1 = Encoder()
        self.fc2 = Decoder()

    def forward(self, x):
        return self.fc2(self.fc1(x))

model = AutoEncoder()
optimizer = optim.Adam(model.parameters(), lr=0.5)
for epoch in range(1, 201):
    train(epoch)               # train() and test() assumed defined elsewhere
    test(epoch, validation)

Just looking for the simplest possible implementation of an autoencoder here.

Yes, that should work.

If you really want to do the simplest, I would suggest:

import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.fc1 = nn.Linear(784, 32)   # encoder
        self.fc2 = nn.Linear(32, 784)   # decoder
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.sigmoid(self.fc1(x))
        x = self.sigmoid(self.fc2(x))
        return x

@alexis-jacq I need to access the intermediate data… Can I do that in your implementation?
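
(One way to expose the intermediate code, a minimal sketch rather than anything from the original replies: have forward return the code alongside the reconstruction.)

import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.fc1 = nn.Linear(784, 32)
        self.fc2 = nn.Linear(32, 784)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        code = self.sigmoid(self.fc1(x))              # intermediate representation
        reconstruction = self.sigmoid(self.fc2(code))
        return reconstruction, code                   # expose both to the caller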

@apaszke I thought it would work too, but it says:

matrices expected, got 4D, 2D tensors at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.7_1485444530918/work/torch/lib/TH/generic/THTensorMath.c:857
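
(For context: this error usually means a 4D image batch, e.g. MNIST's N x 1 x 28 x 28, was passed to nn.Linear without flattening. A minimal sketch of the usual fix, assuming such a batch:)

x = x.view(x.size(0), -1)   # flatten N x 1 x 28 x 28 images to N x 784
output = model(x)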

In that case your approach seems simpler. You can even do:

import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 32), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())
autoencoder = nn.Sequential(encoder, decoder)
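
(A minimal training sketch for this Sequential model, assuming flattened 784-dim inputs and a plain reconstruction loss; train_loader is a placeholder for your own DataLoader.)

import torch.nn as nn
import torch.optim as optim

criterion = nn.MSELoss()
optimizer = optim.Adam(autoencoder.parameters(), lr=1e-3)

for x, _ in train_loader:              # labels are unused for reconstruction
    x = x.view(x.size(0), -1)          # flatten images to 784-dim vectors
    optimizer.zero_grad()
    output = autoencoder(x)
    loss = criterion(output, x)        # reconstruct the input itself
    loss.backward()
    optimizer.step()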

@alexis-jacq I want an autoencoder with tied weights, i.e. the decoder weights equal the encoder weights. How can I implement it?

So you want a kind of balanced autoencoder, where Encoder = Transpose(Decoder)? In that case, I would do something like this:

import torch
import torch.nn as nn

class BalancedAE(nn.Module):
    def __init__(self, size_input, size_output):
        super(BalancedAE, self).__init__()
        # One shared weight matrix; the decoder uses its transpose (tied weights).
        self.encoder = nn.Parameter(torch.rand(size_input, size_output))

    def forward(self, x):
        # Encode: (batch, size_input) @ (size_input, size_output)
        x = torch.sigmoid(torch.mm(x, self.encoder))
        # Decode: (batch, size_output) @ (size_output, size_input)
        x = torch.sigmoid(torch.mm(x, torch.transpose(self.encoder, 0, 1)))
        return x
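
(Usage sketch with illustrative sizes: a 784-dim input and a 32-dim code.)

model = BalancedAE(784, 32)
x = torch.rand(16, 784)    # a batch of 16 flattened inputs
out = model(x)             # reconstruction, same shape as x: (16, 784)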