During training, can I visualize the output of the encoder without the decoder?
Assuming you are working with an autoencoder-like model, you could use a forward hook to grab the bottleneck activation or (depending on your
forward implementation) simply call the encoder part of your model directly.
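A minimal sketch of both approaches, assuming a toy fully-connected autoencoder (the architecture and layer sizes here are just placeholders for your model):

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 2))
        self.decoder = nn.Sequential(
            nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 784))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z)

model = AutoEncoder()
x = torch.randn(16, 784)

# Option 1: call the encoder submodule directly
with torch.no_grad():
    z_direct = model.encoder(x)

# Option 2: register a forward hook on the encoder and run the full model;
# the hook stores the bottleneck activation as a side effect
activations = {}
def hook(module, inp, out):
    activations["bottleneck"] = out.detach()

handle = model.encoder.register_forward_hook(hook)
with torch.no_grad():
    model(x)
handle.remove()  # remove the hook once you no longer need it

print(activations["bottleneck"].shape)  # torch.Size([16, 2])
```

The hook approach is handy when the encoder output is not exposed by your `forward` (e.g. it is an intermediate layer inside a larger `nn.Sequential`), since it captures the activation without changing the model code.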
To visualize these activations, you could apply a dimensionality-reduction technique such as t-SNE to get a low-dimensional representation you can scatter-plot.
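For the visualization step, something along these lines should work with scikit-learn's t-SNE implementation (the random latent codes below are stand-ins for your collected bottleneck activations; sample count and dimensionality are arbitrary):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for stacked bottleneck activations: 100 samples, 32-dim latent space
latents = np.random.randn(100, 32).astype(np.float32)

# Reduce to 2D; perplexity must be smaller than the number of samples
emb = TSNE(n_components=2, perplexity=10, init="random").fit_transform(latents)
print(emb.shape)  # (100, 2)

# Scatter-plot the embedding, e.g. colored by class label if available:
# import matplotlib.pyplot as plt
# plt.scatter(emb[:, 0], emb[:, 1])
# plt.show()
```

Note that t-SNE is typically run on activations collected over many samples (detached and moved to the CPU), not on a single batch, so you would usually accumulate the encoder outputs during a validation pass first.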
Could you share the code of your model? That would make concrete suggestions easier.