Autograd.detect_anomaly() in CNN

Hi, I am using autograd.detect_anomaly() on my CNN and I'm getting the following output, which I don't understand.

/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     97         Variable._execution_engine.run_backward(
     98             tensors, grad_tensors, retain_graph, create_graph,
---> 99             allow_unreachable=True)  # allow_unreachable flag
    100
    101

RuntimeError: Function 'PowBackward0' returned nan values in its 0th output.

You get this output because one of the pow (power) operations returned NaN values during the backward pass. Since you usually don't want NaN values, this most likely indicates a problem somewhere in your code.
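Here is a minimal sketch (an assumed setup, not your model) of how this kind of error can be reproduced and caught: taking a fractional power of a tensor that contains a negative value makes the gradient of the pow op NaN, which anomaly detection reports as coming from PowBackward0.

```python
import torch

# detect_anomaly() can be used as a context manager around the forward
# and backward pass; it raises as soon as a backward function produces NaN.
with torch.autograd.detect_anomaly():
    x = torch.tensor([4.0, -1.0], requires_grad=True)
    loss = (x ** 0.5).sum()   # fractional power of a negative value -> NaN
    loss.backward()           # RuntimeError: Function 'PowBackward0'
                              # returned nan values in its 0th output.
```

When anomaly detection is enabled, the error also comes with a second traceback pointing at the forward call that created the failing node, which is usually the quickest way to find which pow in your model is producing the NaN (for example a negative or zero base raised to a fractional power).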