Hi All,
I am using the following custom loss function:
loss = torch.mean(torch.square(torch.sqrt(y_true + 1e-10) - torch.sqrt(y_predict + 1e-10)) + 10*torch.square(torch.square(torch.sqrt(y_true + 1e-10) - torch.sqrt(y_predict + 1e-10))))
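For context, here is a self-contained, runnable restatement of that loss as a function (the tensor shapes and random inputs below are placeholders of my own for illustration, not from my actual training code):

```python
import torch

def my_cost(y_predict, y_true, eps=1e-10):
    # Squared difference of square roots (Hellinger-style term),
    # plus the same difference raised to the fourth power, weighted by 10.
    diff = torch.sqrt(y_true + eps) - torch.sqrt(y_predict + eps)
    return torch.mean(torch.square(diff) + 10 * torch.square(torch.square(diff)))

# Placeholder inputs in [0, 1) just to show the function runs and backprops.
y_true = torch.rand(4, 8)
y_predict = torch.rand(4, 8, requires_grad=True)
loss = my_cost(y_predict, y_true)
loss.backward()
print(loss.item(), torch.isfinite(y_predict.grad).all().item())
```

With well-behaved inputs like these the loss and its gradient are finite; the NaN only appears during real training.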
After some iterations, I get the error below (with anomaly detection enabled):
[W python_anomaly_mode.cpp:60] Warning: Error detected in PowBackward0. Traceback of forward call that caused the error:
  File "main.py", line 38, in <module>
    main()
  File "main.py", line 34, in main
    train(dataloader_train=train_dl, dataloader_eval=valid_dl, model=model, hyper_params=train_params, device='cuda')
  File "train_model.py", line 81, in train
    loss = my_cost(outputs, labels)
  File "train_model.py", line 15, in my_cost
    loss = torch.mean(torch.square(torch.sqrt(y_true + 1e-10) - torch.sqrt(y_predict + 1e-10)) + 10*torch.square(torch.square(torch.sqrt(y_true + 1e-10) - torch.sqrt(y_predict + 1e-10))))
(function print_stack)
Traceback (most recent call last):
  File "main.py", line 38, in <module>
    main()
  File "main.py", line 34, in main
    train(dataloader_train=train_dl, dataloader_eval=valid_dl, model=model, hyper_params=train_params, device='cuda')
  File "train_model.py", line 83, in train
    loss.backward()
  File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 185, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 127, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Function 'PowBackward0' returned nan values in its 0th output.
My final layer uses a ReLU activation, so only non-negative values should be reaching the sqrt function.
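One detail worth noting (my own observation, not confirmed by the traceback): ReLU outputs can be exactly zero, and while the 1e-10 offset keeps the sqrt gradient finite there, it is still very large, on the order of 5e4, which can magnify any instability upstream. A quick check of the gradient at zero:

```python
import torch

# Gradient of sqrt(x + 1e-10) at x = 0 is 0.5 / sqrt(1e-10) = 50000.
x = torch.zeros(1, requires_grad=True)
y = torch.sqrt(x + 1e-10)
y.backward()
print(x.grad)  # tensor([50000.])
```

So the epsilon avoids a NaN at exactly zero, but does not make the gradient small there.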