About CrossEntropyLoss

During VAE training, I was comparing the loss between (input pic, reconstructed pic)
and the loss between (input pic, input pic).

Even though the reconstruction is nowhere near done, I found that the loss between (input pic, reconstructed pic) is smaller than the loss between (input pic, input pic) during training.

Does anyone know why this happens?
As the title says, I’m using ‘only’ CrossEntropyLoss as the loss function.
(I’m not using the KL divergence term.)

Thank you.

I would assume the reconstructed image has the same shape and type as the input image.
However, if you are using nn.CrossEntropyLoss, the target should be a LongTensor containing some class indices.
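
For example, this is the expected usage (random tensors just for illustration):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

batch_size, num_classes = 4, 10
output = torch.randn(batch_size, num_classes)          # raw logits from the model
target = torch.randint(0, num_classes, (batch_size,))  # LongTensor of class indices

loss = criterion(output, target)
print(loss)
```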
Could you explain your use case a bit or post a (small) executable code snippet?

@ptrblck

Thank you for your reply.
I think I solved this problem by checking the documentation.

What I was doing was applying a softmax activation to the output of the VAE model, even though nn.CrossEntropyLoss already combines nn.LogSoftmax() and nn.NLLLoss(), so the softmax ended up being applied twice.
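
In case anyone else runs into this, here is a minimal sketch of the mistake and the fix (random tensors just for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 10)          # raw, unnormalized model outputs
target = torch.randint(0, 10, (4,))  # LongTensor of class indices

criterion = nn.CrossEntropyLoss()

# What I was doing (wrong): softmax is applied twice,
# once here and once more inside the loss.
loss_wrong = criterion(F.softmax(logits, dim=1), target)

# Correct: pass the raw logits directly.
loss_right = criterion(logits, target)

# Equivalent to CrossEntropyLoss, since it is LogSoftmax + NLLLoss.
loss_manual = F.nll_loss(F.log_softmax(logits, dim=1), target)

print(loss_wrong.item(), loss_right.item(), loss_manual.item())
# loss_right and loss_manual match; loss_wrong does not
```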

Thank you anyway.