In my code, I use torch.clamp as follows:
epsilon = 1e-6
ypred = torch.clamp(ypred, epsilon, 1-epsilon)
and got the following error message:
Error: Function ‘ClampBackward’ returned nan values in its 0th output.
I have no idea what the problem is. Any suggestions?
A non-finite value is introduced somewhere later in the forward pass (after the clamp).
The following gives the same error; Inf values are not considered anomalous for some reason…
with torch.autograd.set_detect_anomaly(True):
    torch.ones(1).requires_grad_().clamp(0.01, 0.99).div(0).backward()
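As a side note, here is a minimal sketch (the variable names are illustrative, not from the code above) showing that an explicit finiteness check on the forward output catches the Inf that anomaly mode lets through:

import torch

with torch.autograd.set_detect_anomaly(True):
    x = torch.ones(1, requires_grad=True)
    out = x.clamp(0.01, 0.99).div(0)   # forward produces Inf; anomaly mode stays silent
    if not torch.isfinite(out).all():  # explicit check catches the Inf before backward
        raise RuntimeError("non-finite value in forward output")
    out.backward()  # without the check above, anomaly mode reports NaN here, in ClampBackward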
Thanks. Your reply is very helpful.
fengqh (July 29, 2021, 6:00pm):
How can I solve this problem, please?
It is a generic numerical problem; this topic's case was about Inf values produced in the forward pass.
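Some generic ways an Inf can appear in a forward pass (illustrative examples, not taken from the code in this topic):

import torch

print(torch.log(torch.tensor(0.0)))           # -inf
print(torch.tensor(1.0) / torch.tensor(0.0))  # inf
print(torch.exp(torch.tensor(100.0)))         # inf (float32 overflow)
# Once such an Inf meets a zero gradient in backward (e.g. clamp's gradient
# outside its bounds), the product is NaN, which is what anomaly detection reports.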
fengqh (July 30, 2021, 10:09am):
I downloaded the code from GitHub and checked it, but I do not find any NaN. How should I deal with this problem? Thank you.
fengqh (July 30, 2021, 10:10am):
I checked the code and do not find any division by zero.
Use a debugger: check that your loss (the forward output) contains non-finite values (perhaps at some epoch > 1), then re-run forward() step by step to find the problem (you can use conditional breakpoints and "run to cursor" in PyCharm's debugger). Sorry, I can't think of an easier way.
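If stepping through by hand is too slow, one complementary option (a rough sketch; add_nonfinite_checks is a made-up helper name, and it assumes the model is an ordinary nn.Module) is to register forward hooks that raise as soon as any submodule produces a non-finite output, so the run stops itself at the failing epoch and batch:

import torch
import torch.nn as nn

def add_nonfinite_checks(model: nn.Module):
    # Debugging aid: raise as soon as any submodule emits a non-finite tensor.
    def make_hook(name):
        def hook(module, inputs, output):
            if isinstance(output, torch.Tensor) and not torch.isfinite(output).all():
                raise RuntimeError(f"non-finite output from module '{name}'")
        return hook
    for name, module in model.named_modules():
        module.register_forward_hook(make_hook(name))

# usage: call add_nonfinite_checks(model) once before training and remove it when done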
fengqh (July 30, 2021, 11:10am):
I hit this problem at around epoch 20, so debugging this way might take a whole day.