Hi,
I am building an autoencoder like this:
import torch

# dim_input, h_dim1, Z_dim are defined elsewhere in my script
class autoencoder(torch.nn.Module):
    def __init__(self):
        super(autoencoder, self).__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Linear(dim_input, h_dim1),
            torch.nn.ReLU(),
            torch.nn.Dropout(),
            torch.nn.Linear(h_dim1, Z_dim),
            torch.nn.ReLU(),
            torch.nn.Dropout())
        self.decoder = torch.nn.Sequential(
            torch.nn.Linear(Z_dim, h_dim1),
            torch.nn.ReLU(),
            torch.nn.Dropout(),
            torch.nn.Linear(h_dim1, dim_input),
            torch.nn.ReLU(),
            torch.nn.Dropout())

    def forward(self, X):
        Z = self.encoder(X)
        X = self.decoder(Z)
        return X, Z
But the weights in the encoder and the decoder are different. How can I tie them, so that each decoder layer's weight matrix is the transpose of the corresponding encoder layer's, and the model's parameters are then only the encoder's weights?
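To make the question concrete, here is a rough sketch (the class name and sizes are my own, not from my actual code) of what I imagine tied weights would look like, using `torch.nn.functional.linear` with the transposed encoder weights on the decoder side. Is this the right approach?

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of "tied weights": the decoder reuses the
# transpose of the encoder's weight matrices, so the only learned
# weight matrices belong to the encoder (the decoder keeps its own
# free bias vectors).
class TiedAutoencoder(torch.nn.Module):
    def __init__(self, dim_input, h_dim1, z_dim):
        super().__init__()
        self.enc1 = torch.nn.Linear(dim_input, h_dim1)
        self.enc2 = torch.nn.Linear(h_dim1, z_dim)
        # decoder biases are still separate parameters
        self.dec1_bias = torch.nn.Parameter(torch.zeros(h_dim1))
        self.dec2_bias = torch.nn.Parameter(torch.zeros(dim_input))

    def forward(self, x):
        z = torch.relu(self.enc2(torch.relu(self.enc1(x))))
        # decoder: same weights as the encoder, transposed
        h = torch.relu(F.linear(z, self.enc2.weight.t(), self.dec1_bias))
        x_rec = torch.relu(F.linear(h, self.enc1.weight.t(), self.dec2_bias))
        return x_rec, z
```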
Another question: in a tied-weight autoencoder, if I use dropout in the encoder for regularization, how should I apply it on the decoder side? In the code above it seems that different nodes are dropped in the encoder and in the decoder.
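For the dropout part, here is a sketch of what I suspect is needed (again, names and sizes are my own): instead of `torch.nn.Dropout`, which draws a fresh mask at every call, sample the mask once per forward pass and reuse it on the decoder side, so the same nodes are dropped in both halves. Is this how people usually do it?

```python
import torch

# Hypothetical sketch: one dropout mask per forward pass, applied to
# the matching hidden layers of both the encoder and the decoder.
class SharedMaskAE(torch.nn.Module):
    def __init__(self, dim_input, h_dim1, z_dim, p=0.5):
        super().__init__()
        self.p = p
        self.enc1 = torch.nn.Linear(dim_input, h_dim1)
        self.enc2 = torch.nn.Linear(h_dim1, z_dim)
        self.dec1 = torch.nn.Linear(z_dim, h_dim1)
        self.dec2 = torch.nn.Linear(h_dim1, dim_input)

    def forward(self, x):
        h = torch.relu(self.enc1(x))
        if self.training:
            # sample the mask once, with the usual inverted-dropout scaling
            mask = (torch.rand_like(h) > self.p).float() / (1 - self.p)
            h = h * mask
        z = torch.relu(self.enc2(h))
        h_dec = torch.relu(self.dec1(z))
        if self.training:
            h_dec = h_dec * mask  # same nodes dropped as on the encoder side
        return self.dec2(h_dec), z
```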
Thanks!