Why does my PyTorch tensor size change and contain NaNs after some batches?

I am training a PyTorch model. After some time, even with the data shuffled, the model output contains, apart from a few finite rows, only NaN values:

tensor([[[    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         ...,
         [ 1.4641,  0.0360, -1.1528,  ..., -2.3592, -2.6310,  6.3893],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan]]],
       device='cuda:0', grad_fn=<AddBackward0>)

Running with torch.autograd.detect_anomaly enabled returns:

RuntimeError: Function 'LogSoftmaxBackward' returned nan values in its 0th output.

which refers to this line in my code: output = F.log_softmax(output, dim=2)
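
For reference, this is roughly how I enable anomaly detection and what I could check right before that line (a sketch, not my exact training code; output is the raw model output from the loop further down):

import torch
import torch.nn.functional as F

# report the first backward op that produces NaN instead of failing later
torch.autograd.set_detect_anomaly(True)

# ... inside the training loop, right before the flagged line:
n_nan = torch.isnan(output).sum().item()
n_inf = torch.isinf(output).sum().item()
if n_nan or n_inf:
    print(f"non-finite values BEFORE log_softmax: {n_nan} NaNs, {n_inf} infs")
output = F.log_softmax(output, dim=2)

If non-finite values already show up before log_softmax, the problem is upstream in the model or the inputs rather than in the softmax itself.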

A normal tensor should look like this:

tensor([[[-3.3904, -3.4340, -3.3703,  ..., -3.3613, -3.5098, -3.4344]],

        [[-3.3760, -3.2948, -3.2673,  ..., -3.4039, -3.3827, -3.3919]],

        [[-3.3857, -3.3358, -3.3901,  ..., -3.4686, -3.4749, -3.3826]],

        ...,

        [[-3.3568, -3.3502, -3.4416,  ..., -3.4463, -3.4921, -3.3769]],

        [[-3.4379, -3.3508, -3.3610,  ..., -3.3707, -3.4030, -3.4244]],

        [[-3.3919, -3.4513, -3.3565,  ..., -3.2714, -3.3984, -3.3643]]],
       device='cuda:0', grad_fn=<TransposeBackward0>)

Please note the double brackets, in case they are important.
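
Since the bracket nesting suggests the shapes differ between the two printouts, this is the kind of per-batch logging I could add instead of eyeballing brackets (a sketch; batch_idx is just illustrative):

import torch

# inside the training loop, after the forward pass (batch_idx is illustrative)
print(f"batch {batch_idx}: output shape = {tuple(output.shape)}, "
      f"NaNs = {torch.isnan(output).sum().item()}")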

Code:

spectrograms, labels, input_lengths, label_lengths = _data  # unpack the batch
spectrograms, labels = spectrograms.to(device), labels.to(device)  # move inputs and targets to the GPU
optimizer.zero_grad()  # reset gradients from the previous step

output = model(spectrograms)  # forward pass
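
A quick finiteness check I could add right after the forward pass (a rough sketch using the same variable names as above):

import torch

# fail fast if either the inputs or the outputs contain NaN/inf
assert torch.isfinite(spectrograms).all(), "non-finite values already in the input batch"
assert torch.isfinite(output).all(), "model produced non-finite values on this batch"

If the first assert fires, the NaNs come from the data or the preprocessing; if only the second one fires, they are produced inside the model.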

Additionally, I tried running it with a bigger batch size (current batch size: 1, bigger batch size: 6), and it ran without errors until about 40% of the first epoch, at which point I got this error:

CUDA out of memory

Also, I tried normalizing the data with torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=128, normalized=True).
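
For completeness, this is roughly how that transform is set up; the log compression at the end is only something I am considering to tame the dynamic range, not part of my current pipeline (the file path is illustrative):

import torch
import torchaudio

mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_mels=128, normalized=True)

waveform, sample_rate = torchaudio.load("example.wav")  # illustrative path
spec = mel_transform(waveform)          # shape: (channel, n_mels, time)
spec = torch.log(spec + 1e-9)           # optional log compression, not in my current code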

Reducing the learning rate from 5e-4 to 5e-5 did not help either.

Additional information: my dataset contains nearly 300,000 .wav files, and the error appeared at 3-10% of the first epoch.

I appreciate any hints and I will gladly submit further information.

Also asked on Stack Overflow: Why does my Pytorch tensor size change and contain NaNs after some batches?

Update: something was wrong with my labels. See the Stack Overflow post for more details.
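
In case anyone hits the same thing: a one-off sanity pass over the labels, like the sketch below, should surface this kind of problem quickly (train_loader and vocab_size are placeholders for my setup):

import torch

# one-off check over the whole dataset before training
# (assumes labels are padded to shape (batch, max_label_len))
for spectrograms, labels, input_lengths, label_lengths in train_loader:
    assert (labels >= 0).all() and (labels < vocab_size).all(), "label index out of range"
    assert min(label_lengths) > 0, "empty label sequence"
    assert max(label_lengths) <= labels.shape[-1], "label_lengths longer than the padded labels"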