I have a pre-trained encoder (trained with a classification loss only; no decoder and no reconstruction loss involved).
If I feed an input image to the encoder, it outputs a latent representation.
I want to see what image that latent representation can generate.
- Is it possible to reconstruct an image from the latent space without training a decoder?
Here is the pseudo code I have in mind. I'm not sure this approach works.
```python
import torch
import torch.nn as nn

encoder = model.encoder().cuda()
encoder.load_state_dict(torch.load('my_encoder.pt'))

class New_Decoder(nn.Module):
    ...  # define decoder layers; forward returns out

decoder = New_Decoder().cuda()
...
enc_out = encoder(input_image)
dec_out = decoder(enc_out)
...
```
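To check whether the pipeline itself is sound, here is a minimal runnable version of the sketch above. The encoder and decoder architectures are hypothetical stand-ins (the real pre-trained encoder and its latent size would replace them), and it runs on CPU for simplicity:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the pre-trained encoder:
# maps a 1x28x28 image to a 32-dim latent vector (assumed sizes).
encoder = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1),  # -> 8x14x14
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 32),
)

# New decoder that inverts the shapes back to an image.
class NewDecoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 8 * 14 * 14)
        self.deconv = nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1)  # -> 1x28x28

    def forward(self, z):
        x = self.fc(z).view(-1, 8, 14, 14)
        return torch.sigmoid(self.deconv(x))

decoder = NewDecoder()

input_image = torch.randn(4, 1, 28, 28)   # dummy batch
with torch.no_grad():                     # encoder is used for inference only
    enc_out = encoder(input_image)
dec_out = decoder(enc_out)
print(dec_out.shape)                      # same shape as the input batch
```

The shapes line up, but with a randomly initialized decoder the output will be noise; the decoder's weights have to come from somewhere, which is what the second question below is about.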
- If a trained decoder is needed:
I don't want the reconstruction loss to affect my encoder; I want the encoder trained with the classification loss only.
Is there any way to do this?
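One common pattern that matches this constraint is to freeze the pre-trained encoder and train only the new decoder with a reconstruction loss, so no gradient ever reaches the encoder. A minimal sketch, with hypothetical stand-in modules and a dummy batch in place of a real data loader:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins; the real pre-trained encoder would be loaded here.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 32))
decoder = nn.Sequential(nn.Linear(32, 28 * 28), nn.Sigmoid())

# Freeze the encoder so the reconstruction loss cannot update it.
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()

# The optimizer sees only the decoder's parameters.
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
criterion = nn.MSELoss()

images = torch.rand(8, 1, 28, 28)          # dummy batch
w0 = encoder[1].weight.clone()             # snapshot to verify the freeze

with torch.no_grad():                      # no graph is built through the encoder
    z = encoder(images)
recon = decoder(z).view(-1, 1, 28, 28)
loss = criterion(recon, images)

optimizer.zero_grad()
loss.backward()
optimizer.step()                           # updates decoder weights only
```

Because the encoder is frozen and excluded from the optimizer, its classification-trained weights stay exactly as loaded, while the decoder learns to invert the fixed latent space.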