Construct image from encoder output

I have a pre-trained encoder (trained with only a classification loss; there is no decoder and no reconstruction loss).
If I feed an input image to the encoder, it outputs a latent representation.
I want to see what image that latent representation can generate.

  1. Is it possible to construct an image from the latent space without training a decoder?
    Here is the pseudocode I have in mind. I'm not sure this approach works.
encoder = model.encoder().cuda()

class NewDecoder(nn.Module):
    ...  # define decoder layers and a forward() that returns `out`

decoder = NewDecoder().cuda()
enc_out = encoder(input_image)
dec_out = decoder(enc_out)
  2. If a trained decoder is needed:
    I don't want a reconstruction loss to affect my model.
    I want to train the encoder with only the classification loss.
    Is there any way to do this?
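For reference, the pseudocode above does run end to end. Here is a self-contained toy version (the 28×28 grayscale input and 32-d latent are assumptions, not the actual model); note that with a randomly initialized decoder, `dec_out` is just noise:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the pre-trained encoder and the new decoder
# (layer choices and sizes are illustrative only).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 32))

class NewDecoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 28 * 28),
            nn.Sigmoid(),  # map back to pixel range [0, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

decoder = NewDecoder()

input_image = torch.rand(1, 1, 28, 28)
enc_out = encoder(input_image)   # latent vector, shape (1, 32)
dec_out = decoder(enc_out)       # image-shaped tensor, shape (1, 1, 28, 28)

# The shapes line up, but the untrained decoder has learned no mapping,
# so dec_out is essentially random noise.
print(dec_out.shape)
```

So mechanically nothing stops you, but without some training signal for the decoder the output carries no information about the input image.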
  1. As far as I know, there is no way to do this automatically. You might, however, perform a k-nearest-neighbor search in the latent space to retrieve the closest image from your dataset.
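A minimal sketch of that nearest-neighbor lookup, assuming latents for the whole dataset have already been precomputed with the frozen encoder (all names and shapes here are illustrative):

```python
import torch

torch.manual_seed(0)

def nearest_dataset_image(query_latent, dataset_latents, dataset_images, k=1):
    # dataset_latents: (N, D) latents precomputed with the frozen encoder
    # dataset_images:  (N, C, H, W) the images those latents came from
    dists = torch.cdist(query_latent.unsqueeze(0), dataset_latents)  # (1, N)
    idx = dists.topk(k, largest=False).indices.squeeze(0)  # k closest indices
    return dataset_images[idx]

# Tiny fake "dataset" just to show the call shapes.
latents = torch.randn(100, 32)
images = torch.rand(100, 1, 28, 28)
query = latents[7] + 0.01 * torch.randn(32)  # a latent very close to sample 7
match = nearest_dataset_image(query, latents, images)
print(match.shape)  # torch.Size([1, 1, 28, 28])
```

This only retrieves an existing dataset image whose latent is closest to the query; it cannot synthesize a genuinely new image.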

  2. After the encoder is trained, you could use its latent vectors to train the decoder separately without affecting the encoder (e.g. using torch.no_grad() or .detach()). This might help to reconstruct new images (approximately).
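The second suggestion can be sketched as follows. The encoder/decoder definitions and shapes are placeholders, not the real model; the point is that the reconstruction loss updates only the decoder:

```python
import torch
import torch.nn as nn

# Placeholder pre-trained encoder and fresh decoder (toy shapes).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 32))
decoder = nn.Sequential(nn.Linear(32, 28 * 28), nn.Sigmoid())

encoder.requires_grad_(False)  # freeze the encoder entirely
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)  # decoder params only

enc_before = encoder[1].weight.clone()  # snapshot to verify nothing moves
dec_before = decoder[0].weight.clone()

images = torch.rand(16, 1, 28, 28)  # stand-in batch
for _ in range(5):
    with torch.no_grad():            # latents computed outside autograd
        z = encoder(images)
    recon = decoder(z).view_as(images)
    loss = nn.functional.mse_loss(recon, images)
    opt.zero_grad()
    loss.backward()                  # gradients reach only the decoder
    opt.step()

# After training: decoder weights changed, encoder weights are untouched.
```

Because the latents are produced under torch.no_grad() and the optimizer only holds the decoder's parameters, the reconstruction loss cannot change the encoder even in principle.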

Thanks!

If I train the decoder independently, what loss should I use?
I don't want to use a reconstruction loss.

As I am not sure what you want to achieve, it's difficult to comment on this.
May I ask why you have reservations about using a reconstruction loss? Did you run into any issues with it?

I just want to visualize the output of the pre-trained encoder.
If I use a reconstruction loss, the encoder gets trained further, and I don't want to update the encoder.

+) Can I visualize the output of the encoder without a decoder?