How to define a custom loss function?

I defined a loss function and it runs normally. However, it produces bad results, while NLL produces accurate results in the same network. Can anyone take a look and tell me whether my code is right?

class CrpsLossExt(Function):
    @staticmethod
    def forward(ctx, outEjectF, targetEjectF, nClass):
        targetEjectF = targetEjectF.cpu().numpy()
        predEjectF = outEjectF.cpu().numpy()
        # the following calls are not torch functions; they come from
        # plain Python, numpy, or SciPy
        predictCdf = utils_heart.real_to_cdf(predEjectF, nClass)
        targetHeavySide = utils_heart.heaviside_function(targetEjectF, nClass)
        crplLoss = utils_heart.crps(predictCdf, targetHeavySide)
        tensorCrplLoss = torch.from_numpy(np.array(crplLoss))
        tensorCrplLoss = tensorCrplLoss.requires_grad_()
        return tensorCrplLoss

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

Hi lzh21cen!

Please take a look at the "Extending torch.autograd" section
of Extending PyTorch.

Once you move out of pytorch and into something like numpy,
autograd can no longer track your gradients, so you (your loss
function) will have to do it yourself. That’s what backward is for.

Your backward doesn’t do anything (except return its input as
its output). So the gradients that get passed to the optimizer don’t
know anything about the structure of your loss function. That is,
your gradients are incorrect, so the optimizer won’t be moving
your weights to a lower loss.

You either need to rewrite your loss function using pytorch
Tensor functions so that autograd can track and calculate
the gradients automatically for you, or you have to do calculus
on your loss function to get its gradients and implement them
by hand in your backward function.
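As an illustration of the second option, here is a minimal sketch of a `Function` that computes its forward pass in numpy and supplies the gradient by hand in `backward`. It is not the CRPS loss (those internals live in `utils_heart` and aren't shown); a mean-squared-error loss stands in, where the hand-derived gradient is d/dpred mean((pred - target)^2) = 2 * (pred - target) / N:

```python
import numpy as np
import torch
from torch.autograd import Function

# Sketch only: an MSE loss whose forward leaves torch for numpy, so
# autograd cannot track it and backward must return the gradient by hand.
class NumpyMseLoss(Function):
    @staticmethod
    def forward(ctx, pred, target):
        ctx.save_for_backward(pred, target)
        diff = pred.cpu().numpy() - target.cpu().numpy()
        loss = np.mean(diff ** 2)            # computed outside autograd
        return torch.tensor(loss, dtype=pred.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        pred, target = ctx.saved_tensors
        # hand-derived gradient of mean((pred - target)^2) w.r.t. pred
        grad_pred = 2.0 * (pred - target) / pred.numel()
        # one return value per forward input; target needs no gradient
        return grad_output * grad_pred, None
```

After `loss = NumpyMseLoss.apply(pred, target)` and `loss.backward()`, `pred.grad` holds the correct gradient, which is exactly what a backward that merely returns its input cannot provide.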

Good luck.

K. Frank

Thanks a lot, K. Frank!
If I define the loss function in another way, inheriting from nn.Module, must I use torch functions in all forward steps?

class CrpsLossModule(nn.Module):
    def __init__(self, nClass, reduce=True):
        super(CrpsLossModule, self).__init__()
        self.nClass = nClass
        self.reduce = reduce

    def forward(self, outEjectF, targetEjectF):
        # forward code here

        if self.reduce:
            return torch.mean(F_loss)
        return F_loss
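For reference, here is a sketch of such an nn.Module loss built entirely from torch tensor ops, so autograd tracks the gradients with no custom backward. The actual `F_loss` computation from the question is unknown, so a simple squared error stands in for it:

```python
import torch
import torch.nn as nn

# Sketch only: a loss as an nn.Module whose forward uses only torch
# functions, so autograd handles the backward pass automatically.
class SquaredErrorLoss(nn.Module):
    def __init__(self, reduce=True):
        super(SquaredErrorLoss, self).__init__()
        self.reduce = reduce

    def forward(self, pred, target):
        F_loss = (pred - target) ** 2   # stays inside torch, so grads flow
        if self.reduce:
            return torch.mean(F_loss)
        return F_loss
```

Calling `SquaredErrorLoss()(pred, target).backward()` then fills `pred.grad` correctly, with no `Function` subclass needed.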

Hello lzh21cen!

You could try it out and see. Pass a requires_grad = True tensor
through your loss function and see if .backward() gives the correct
result. Something like:

input = torch.randn((2, 5), requires_grad=True)
target = ...
loss = my_custom_loss_function(input, target)
loss.backward()
print(input.grad)
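A more systematic check is torch.autograd.gradcheck, which compares the analytical gradients against a numerical finite-difference estimate. A sketch, using a simple mean-squared-error stand-in since the real loss isn't shown (gradcheck wants double-precision inputs):

```python
import torch

# Stand-in loss for illustration; substitute your own loss function.
def my_loss(pred, target):
    return torch.mean((pred - target) ** 2)

pred = torch.randn(2, 5, dtype=torch.double, requires_grad=True)
target = torch.randn(2, 5, dtype=torch.double)

# Returns True if analytical and numerical gradients agree;
# raises an error with details if they don't.
ok = torch.autograd.gradcheck(my_loss, (pred, target))
print(ok)
```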

(As a side note, I have no idea what F_loss means here, so I have
no idea whether this code makes sense or would work the way you
intend.)

K. Frank

Thanks, your answers are extremely helpful! I think I need to read the documentation in more detail.