MSELoss - issues with reduction

I am trying to train a basic autoencoder (seq2seq) using LSTMs.
I'm using `MSELoss` for the loss calculation. The issue is that I get NaN values when I set `reduction='mean'` or `reduction='none'`. However, when `reduction='sum'`, I get the expected (large) values.
I'm fairly new to PyTorch, so could someone please explain this? Is this expected behavior or a bug?
Please note:
→ `reduction='none'`/`'mean'` gives proper loss values for a small dummy dataset. However, my real dataset (LibriSpeech, which is quite large) gives NaN values.
→ I have normalized my dataset.
→ The data is padded with zeros, but I have used `pack_padded_sequence()` before feeding it into the encoder.
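As a sanity check, this is roughly how the padded batch is packed before the encoder (the shapes below are made-up dummy values, not my real MFCC dimensions):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Dummy batch: 3 sequences padded to length 5, feature dim 4 (placeholder sizes)
batch = torch.randn(3, 5, 4)
lengths = torch.tensor([5, 3, 2])  # true lengths, sorted descending

encoder = nn.LSTM(input_size=4, hidden_size=8, batch_first=True)
packed = pack_padded_sequence(batch, lengths, batch_first=True, enforce_sorted=True)
out_packed, (h, c) = encoder(packed)

# Unpack for loss computation; padded positions come back as zeros
out, out_lengths = pad_packed_sequence(out_packed, batch_first=True)
print(out.shape)  # torch.Size([3, 5, 8])
```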

I would like to understand more about this. Thank you.
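For reference, here is a minimal sketch (with made-up toy tensors) of how the three `reduction` modes of `nn.MSELoss` relate: `'sum'` is the total squared error, `'mean'` divides that total by the number of elements, and `'none'` keeps the per-element errors, which makes it easy to check with `torch.isnan` exactly where NaNs first appear:

```python
import torch
import torch.nn as nn

# Toy predictions/targets (hypothetical values, just to illustrate)
pred = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
target = torch.tensor([[1.5, 2.0], [2.0, 4.0]])

losses = {r: nn.MSELoss(reduction=r)(pred, target) for r in ("none", "mean", "sum")}

print(losses["none"])  # per-element squared errors, shape (2, 2)
print(losses["sum"])   # total squared error
print(losses["mean"])  # total squared error / number of elements

# A quick NaN check on the unreduced loss can help localize bad inputs:
print(torch.isnan(losses["none"]).any())
```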

Hi @lazypanda , can you share a minimal example to reproduce this?

Hi, thanks for your reply.
I am actually trying to run the following code for LibriSpeech:

I have written a small script to create a MFCC dataset from audio samples of Librispeech. I can share that with you if you need it. Please let me know. Hope this helps. Thanks.

Yes, please share the minimal working example (MWE), i.e., only the code required to reproduce this error on our end. Also, reducing your query to an MWE often helps you implicitly debug the issue yourself :wink: