without raising any warning or error. During training I often end up with NaN values and have to debug all the way back to figure out where the numerical issue occurred.
Is there any way to fix it?
I'm using PyTorch 0.4.0 on Mac.
(Neither Jupyter notebook nor PyCharm gives any numerical warning.)
A dirty hack would be to use torch.clamp with the maximum representable value for your dtype.
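A minimal sketch of that workaround: clamp to the largest finite value the dtype can represent, taken from `torch.finfo`, so an overflowed `inf` saturates instead of propagating. The helper name `clamp_to_finite` is made up for illustration, not a PyTorch API.

```python
import torch

# Hypothetical helper: saturate a tensor at the finite range of its dtype,
# so values that overflowed to inf become finfo.max / -finfo.max instead.
def clamp_to_finite(t: torch.Tensor) -> torch.Tensor:
    limit = torch.finfo(t.dtype).max
    return torch.clamp(t, min=-limit, max=limit)

x = torch.tensor([1e38], dtype=torch.float32) * 10.0  # overflows float32 -> inf
y = clamp_to_finite(x)                                # saturates at finfo.max
```

As noted below, this only hides the overflow; the tensor is finite again, but the computation that produced it is still wrong.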
But I would prefer a warning or an error, because in many cases an overflow is caused by a bad implementation; clamping the value might silence that bug and lead to unexpected results, making it even harder to debug later.
Has this been addressed in PyTorch? I would like to see such a feature as well. It takes too long to identify the location of the overflow in code, especially in complex models.
Checking each value for an overflow could hurt performance in general.
To debug an invalid value, you could use torch.autograd.set_detect_anomaly(True), which raises an error with a stack trace pointing at the operation that produced a NaN during the backward pass.
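A small sketch of anomaly detection in action, assuming a recent PyTorch build: taking the square root of a negative number produces a NaN, and with anomaly mode enabled the backward pass raises a RuntimeError naming the offending backward function instead of silently propagating the NaN.

```python
import torch

# Anomaly mode records a traceback for each forward op and raises as soon
# as a backward function returns NaN, identifying where the NaN originated.
torch.autograd.set_detect_anomaly(True)

x = torch.tensor([0.0], requires_grad=True)
y = torch.sqrt(x - 1.0)          # sqrt of a negative number -> nan

try:
    y.backward()                 # backward of sqrt returns nan here
    caught = False
except RuntimeError as err:
    caught = True
    print(err)                   # message names the op that produced the nan
```

Note that anomaly mode adds noticeable overhead, so it is best enabled only while hunting the bug, not during normal training.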