I just want to make sure that what I’m doing is correct, since I could not find this question asked before. Suppose I structure my model as below, with the encoder as a separate module used within the autoencoder module:
When I call unsupervised_optimizer.step(), the encoder part of the autoencoder will be trained as well, right, since it is part of the computational graph?
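The code from the original post is not shown here, but a minimal sketch of the structure described, with the encoder built separately and passed into the autoencoder, might look like this (class names follow the thread; layer sizes are illustrative placeholders):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32)
        )

    def forward(self, x):
        return self.net(x)

class Autoencoder(nn.Module):
    def __init__(self, encoder):
        super().__init__()
        # the encoder is constructed outside and passed in
        self.encoder = encoder
        self.decoder = nn.Sequential(
            nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784)
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

encoder = Encoder()
unsupervised_model = Autoencoder(encoder)
unsupervised_optimizer = torch.optim.Adam(unsupervised_model.parameters())
```

Because `self.encoder = encoder` registers the encoder as a child module, its parameters are included in `unsupervised_model.parameters()` and will be updated by the optimizer.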
Yes, backpropagation works for both the encoder and the decoder. You can verify this by looking at the list of parameters in unsupervised_model.parameters():
p = list(unsupervised_model.parameters())
for w in p:
    print(w.shape)
These are the parameters of both the encoder and the decoder, so one optimizer step updates them all.
That said, it is a bit unusual to pass the encoder into the autoencoder. You can define the encoder object inside Autoencoder instead. The following code achieves the same thing but is more readable.
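The refactored snippet from the original reply is not included above; a minimal sketch of what was suggested, with the encoder defined inside the autoencoder as a submodule, might look like this (layer sizes are placeholders):

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # encoder defined directly as a submodule, no external object needed
        self.encoder = nn.Sequential(
            nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32)
        )
        self.decoder = nn.Sequential(
            nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784)
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

unsupervised_model = Autoencoder()
```

Both versions register the same parameters; the only difference is where the encoder is constructed.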
Thank you very much for the reply. The reason I did it this way was so that I could easily access the encoder once it was trained, for further supervised training. Is it better to structure the model as you have and then recover the encoder at the end with supervised_model = unsupervised_model.encoder?
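Assuming the autoencoder stores its encoder as the attribute self.encoder, accessing unsupervised_model.encoder returns the same module object, so it carries the trained weights. A short sketch of that reuse pattern (the classification head and layer sizes are hypothetical):

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 32))
        self.decoder = nn.Sequential(nn.Linear(32, 784))

    def forward(self, x):
        return self.decoder(self.encoder(x))

unsupervised_model = Autoencoder()
# ... unsupervised training would happen here ...

# reuse the pretrained encoder in a supervised model
supervised_model = nn.Sequential(
    unsupervised_model.encoder,  # same object, shares the trained weights
    nn.Linear(32, 10),           # new classification head (hypothetical)
)
```

Note that the two models share the encoder's parameters, so further supervised training will also modify unsupervised_model.encoder unless you copy the module first (e.g. with copy.deepcopy).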