I defined a custom loss function and it runs without errors, but it produces bad results, while NLL loss produces accurate results with the same network. Could anyone take a look and tell me whether my code is right?

class CrpsLossExt(Function):
    @staticmethod
    def forward(ctx, outEjectF, targetEjectF, nClass):
        targetEjectF = targetEjectF.cpu().numpy()
        predEjectF = outEjectF.cpu().numpy()
        # the following calls are not torch functions; they come from
        # plain python, numpy, or scipy
        predictCdf = utils_heart.real_to_cdf(predEjectF, nClass)
        targetHeavySide = utils_heart.heaviside_function(targetEjectF, nClass)
        crplLoss = utils_heart.crps(predictCdf, targetHeavySide)
        tensorCrplLoss = torch.from_numpy(np.array(crplLoss))
        tensorCrplLoss = tensorCrplLoss.requires_grad_()
        return tensorCrplLoss

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

Please take a look at the "Extending torch.autograd" section
of Extending PyTorch.

Once you move out of pytorch and into something like numpy,
autograd can no longer track your gradients, so you (your loss
function) will have to do it yourself. That’s what backward is for.
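You can see this directly: in recent pytorch versions, calling .numpy() on a tensor that requires grad is refused outright, and detaching first works but silently severs the graph (a minimal check, with made-up tensor names):

```python
import torch

x = torch.ones(3, requires_grad=True)
try:
    x.numpy()
    converted = True
except RuntimeError:
    # pytorch refuses: autograd cannot follow numpy operations
    converted = False

# Detaching first is allowed, but the result is cut off from the graph:
y = torch.from_numpy(x.detach().numpy()) * 2.0
print(converted, y.requires_grad)
```

Nothing computed from y here will ever send gradients back to x, which is exactly the situation your loss function is in.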

Your backward doesn’t do anything (except return its input as
its output). So the gradients that get passed to the optimizer don’t
know anything about the structure of your loss function. That is,
your gradients are incorrect, so the optimizer won’t be moving
your weights to a lower loss.

You either need to rewrite your loss function using pytorch Tensor functions so that autograd can track and calculate
the gradients automatically for you, or you have to do the calculus
on your loss function to get its gradients and implement them
by hand in your backward function.
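As an illustration of the second option, here is a minimal sketch using a stand-in loss (plain MSE computed in numpy, not your CRPS loss): the forward leaves pytorch, so backward must supply the analytic gradient, which for mean((pred - target)^2) is 2 * (pred - target) / n:

```python
import numpy as np
import torch
from torch.autograd import Function

class NumpyMseLoss(Function):
    # Stand-in example: forward computes the loss in numpy, so autograd
    # cannot track it, and backward implements the gradient by hand.
    @staticmethod
    def forward(ctx, pred, target):
        ctx.save_for_backward(pred, target)
        p = pred.detach().cpu().numpy()
        t = target.detach().cpu().numpy()
        loss = np.mean((p - t) ** 2)
        return pred.new_tensor(loss)

    @staticmethod
    def backward(ctx, grad_output):
        pred, target = ctx.saved_tensors
        # analytic gradient of mean((pred - target)**2) w.r.t. pred
        grad = 2.0 * (pred - target) / pred.numel()
        # one return value per forward input; target needs no gradient
        return grad_output * grad, None
```

With backward written this way, pred.grad after NumpyMseLoss.apply(pred, target).backward() matches what autograd would compute for a pure-torch MSE; you would have to derive the analogous gradient for your CRPS loss.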

Thanks a lot, K. Frank!
If I define my loss function another way, inheriting from nn.Module, must I use torch functions in all the forward steps?

You could try it out and see. Pass a requires_grad = True tensor
through your loss function and see if .backward() gives the correct
result. Something like:
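A sketch of such a check, using torch.autograd.gradcheck, which compares autograd's gradient against a numerical estimate (the simple MSE-style loss here is just a placeholder for yours; gradcheck wants double precision):

```python
import torch

def my_loss(pred, target):
    # stays entirely in torch tensor operations, so autograd can
    # differentiate it
    return ((pred - target) ** 2).mean()

pred = torch.randn(5, dtype=torch.double, requires_grad=True)
target = torch.randn(5, dtype=torch.double)

# returns True if the analytic and numerical gradients agree
ok = torch.autograd.gradcheck(lambda p: my_loss(p, target), (pred,))
print(ok)
```

If your nn.Module loss mixes in numpy steps anywhere, this check will fail (or .backward() will raise), which tells you autograd lost track of the computation.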