Hello everyone, hope you are having a great time.
I want to create an autoencoder, a simple one. If my memory serves me correctly, back in the day one way to create an autoencoder was to share weights between the encoder and the decoder; that is, the decoder simply used the transpose of the encoder's weight matrix. Setting aside the practicality of this, and whether it was for better or worse, can you please help me do it?
Based on this discussion, I tried doing:

```python
self.decoder.weight = self.encoder.weight.t()
```

but this won't work, and I get:

```
TypeError: cannot assign 'torch.FloatTensor' as parameter 'weight' (torch.nn.Parameter or None expected)
```
So I ended up doing:
```python
class AutoEncoder(nn.Module):
    def __init__(self, embeddingsize=40):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(28 * 28, embeddingsize), nn.Tanh())
        self.decoder = nn.Sequential(nn.Linear(embeddingsize, 28 * 28), nn.Sigmoid())
        # index into the Sequential to reach the Linear layers
        # (nn.Sequential itself has no .weight attribute)
        self.decoder[0].weight = nn.Parameter(self.encoder[0].weight.t())

    def forward(self, input):
        output = self.decoder(self.encoder(input))
        return output
```
The network trains and I get no errors, but I'm not sure whether it actually uses the very same weights for both of them, or whether the encoder's weights are simply used as initial values and
nn.Parameter() just creates a brand-new weight matrix for the decoder!
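For comparison, this is a minimal sketch of what I assume genuine tying would look like: a single weight matrix used by the encoder directly and by the decoder through its transpose in `forward()`, via `F.linear`. The class name `TiedAutoEncoder`, the separate bias parameters, and the Xavier initialization are my own guesses, not taken from any reference; is something like this the right approach instead?

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedAutoEncoder(nn.Module):
    def __init__(self, embeddingsize=40):
        super().__init__()
        # one weight matrix, shared by encoder and decoder
        self.weight = nn.Parameter(torch.empty(embeddingsize, 28 * 28))
        nn.init.xavier_uniform_(self.weight)
        self.enc_bias = nn.Parameter(torch.zeros(embeddingsize))
        self.dec_bias = nn.Parameter(torch.zeros(28 * 28))

    def forward(self, input):
        # encoder: uses the weight as-is
        hidden = torch.tanh(F.linear(input, self.weight, self.enc_bias))
        # decoder: uses the transpose of the very same weight,
        # so gradients from both paths accumulate into one parameter
        return torch.sigmoid(F.linear(hidden, self.weight.t(), self.dec_bias))
```

Because `self.weight.t()` is just a view taken inside `forward()`, there is only one registered weight parameter, so the tie cannot drift apart during training.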
Any help in this regard is greatly appreciated, and thanks a lot in advance.