Trouble with NaN

I am trying to train an autoencoder and I'm using torch.nn.MSELoss() as the loss function. I keep getting the error below:

RuntimeError                              Traceback (most recent call last)
<ipython-input-76-67910671b729> in <module>()
     20 
     21         optimizer.zero_grad()
---> 22         loss.backward()
     23         optimizer.step()
     24 
RuntimeError: Function 'MseLossBackward' returned nan values in its 0th output.

I have tried reading many of the other posts on this topic and have tried various approaches, such as adjusting the learning rate and gradient clipping. The weird thing is that the loss doesn't seem to diverge at all. I am not sure why this error keeps occurring; any suggestions or help would be appreciated. I am very new to PyTorch.
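For reference, this is roughly how I applied gradient clipping in my training step (the tiny model and input here are placeholders, not my actual autoencoder), in case I'm doing it wrong:

```python
import torch
import torch.nn as nn

# Placeholder autoencoder and data, just to illustrate the training step
model = nn.Sequential(nn.Linear(8, 3), nn.ReLU(), nn.Linear(3, 8))
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(16, 8)  # made-up input batch

optimizer.zero_grad()
loss = criterion(model(x), x)  # reconstruction loss
loss.backward()
# Clip the total gradient norm to 1.0 before the optimizer step
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```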

Could you check if you have any NaNs in your input? And if there aren't any, could you check whether the model outputs contain NaNs?
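A minimal sketch of how to do that check with `torch.isnan` (the helper name and sample tensors here are just for illustration); you could call it on your input batch and on `model(input)` before computing the loss:

```python
import torch

def count_nans(name, t):
    # Count NaN entries in a tensor and report them if any are present
    n = torch.isnan(t).sum().item()
    if n:
        print(f"{name}: {n} NaN value(s)")
    return n

batch = torch.tensor([1.0, float("nan"), 3.0])  # made-up example
count_nans("input", batch)    # reports 1 NaN
count_nans("clean", torch.zeros(3))  # reports nothing
```

Incidentally, the fact that your traceback names `MseLossBackward` suggests anomaly detection (`torch.autograd.set_detect_anomaly(True)`) is on, which is useful here: it points at the first backward function that produced NaNs.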