NaN values popping up during loss.backward()

I’m using CrossEntropyLoss with a batch size of 4. These are the predicted logits and target labels I’m feeding to it, along with the resulting loss value:

 tensor([[-0.0052,  0.2059, -0.1473],
        [-0.0250,  0.0953,  0.0047],
        [ 0.0684,  0.1638, -0.0705],
        [-0.0195,  0.0100, -0.0874]], device='cuda:0', grad_fn=<AddmmBackward>)
 tensor([2, 2, 2, 2], device='cuda:0')
loss: tensor(1.1942, device='cuda:0', grad_fn=<NllLossBackward>)
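(For what it’s worth, the reported loss is consistent with these logits: a quick manual cross-entropy, sketched here with only the standard library, reproduces ~1.1942, so the forward pass up to the loss looks fine.)

```python
import math

# Logits and targets copied from the post above. This manually re-computes
# CrossEntropyLoss (softmax + negative log-likelihood, mean reduction)
# to confirm the reported loss of ~1.1942 is plausible for these values.
logits = [
    [-0.0052,  0.2059, -0.1473],
    [-0.0250,  0.0953,  0.0047],
    [ 0.0684,  0.1638, -0.0705],
    [-0.0195,  0.0100, -0.0874],
]
targets = [2, 2, 2, 2]

def cross_entropy(logits, targets):
    total = 0.0
    for row, t in zip(logits, targets):
        denom = sum(math.exp(x) for x in row)          # softmax denominator
        total += -math.log(math.exp(row[t]) / denom)   # -log p(target class)
    return total / len(targets)                        # mean over the batch

print(f"{cross_entropy(logits, targets):.4f}")  # ~1.1942, matching the post
```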

Here is the error message I’m getting after setting autograd.set_detect_anomaly(True):

Traceback (most recent call last):
  File "", line 247, in <module>
  File "", line 200, in main
    train(model, train_loader, optimizer, device, epoch, 'train', debug_mode)
  File "", line 80, in train
  File "/home/jlko/miniconda3/envs/liuNetEnv/lib/python3.6/site-packages/torch/", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/jlko/miniconda3/envs/liuNetEnv/lib/python3.6/site-packages/torch/autograd/", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Function 'NativeBatchNormBackward' returned nan values in its 0th output.
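Since anomaly detection blames NativeBatchNormBackward, one way to narrow down which layer first produces non-finite values is to register forward hooks on every module. This is only a debugging sketch; the model and input below are placeholders, substitute your own:

```python
import torch
import torch.nn as nn

def add_nan_hooks(model):
    """Register forward hooks that raise on the first module emitting NaN/Inf."""
    def make_hook(name):
        def hook(module, inputs, output):
            if isinstance(output, torch.Tensor) and not torch.isfinite(output).all():
                raise RuntimeError(
                    f"non-finite output in '{name}' ({module.__class__.__name__})"
                )
        return hook
    for name, module in model.named_modules():
        module.register_forward_hook(make_hook(name))

# Placeholder model/input just to show usage.
model = nn.Sequential(nn.Linear(8, 4), nn.BatchNorm1d(4), nn.ReLU())
add_nan_hooks(model)
out = model(torch.randn(2, 8))  # raises as soon as any layer outputs NaN/Inf
```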

Here is the architecture of my neural network: I am only using the self.conv and self.fc modules, so you can ignore all of the stuff related to self.age_encoder.

Did you make sure that no inputs contain invalid values, e.g. by checking torch.isfinite(input).all()?
Do you see an invalid output (in the model output or loss) before anomaly detection raises the error?
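Concretely, such a check could look like this (a sketch; the names in the usage comments are placeholders for your own training-loop variables):

```python
import torch

def assert_all_finite(tensor, name="tensor"):
    """Raise early if a tensor contains any NaN or Inf values."""
    if not torch.isfinite(tensor).all():
        bad = (~torch.isfinite(tensor)).sum().item()
        raise ValueError(f"{name} contains {bad} non-finite value(s)")

# Usage inside the training loop (inputs/outputs/loss are your own tensors):
# assert_all_finite(inputs, "inputs")
# assert_all_finite(outputs, "model output")
# assert_all_finite(loss, "loss")
```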

Hi ptrblck,

I’m running into a similar problem. I checked that all the inputs are finite, but the error persists.


Could you post a minimal, executable code snippet which would reproduce this issue, please?