# How to define a custom (self-defined) loss?

I defined a loss function and it runs without errors, but it produces bad results, while `nll` produces accurate results in the same network. Could anyone take a look at whether my code is right?

```python
class CrpsLossExt(Function):
    def forward(outEjectF, targetEjectF, nClass):
        targetEjectF = targetEjectF.cpu().numpy()
        predEjectF = outEjectF.cpu().numpy()

        # the following calls are not torch functions; they come from
        # python, numpy, or scipy (via utils_heart)
        predictCdf = utils_heart.real_to_cdf(predEjectF, nClass)
        targetHeavySide = utils_heart.heaviside_function(targetEjectF, nClass)
        crplLoss = utils_heart.crps(predictCdf, targetHeavySide)
        tensorCrplLoss = torch.from_numpy(np.array(crplLoss))

        return tensorCrplLoss
```

Hi lzh21cen!

Please take a look at the "Extending `torch.autograd`" section
of Extending PyTorch.

Once you move out of pytorch and into something like numpy,
autograd can no longer track your computation, so if you want
gradients (and you do, to train your network) you will have to
compute them yourself. That’s what `backward` is for.

Your `backward` doesn’t do anything (except return its input as
its output). So the gradients that get passed to the optimizer don’t
move your weights toward a lower loss.

You either need to rewrite your loss function using pytorch
`Tensor` functions so that autograd can track and calculate
the gradients automatically for you, or you have to do the calculus
by hand in your `backward` function.
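As a minimal sketch of the second option (the squared-error loss and the class name below are placeholders, not the poster's CRPS code): a `Function` whose forward leaves pytorch for numpy, with the matching gradient supplied by hand in `backward`:

```python
import numpy as np
import torch
from torch.autograd import Function

class NumpySquaredError(Function):
    """Illustrative squared-error loss whose forward leaves pytorch for
    numpy, so autograd cannot track it and backward must supply the
    gradient by hand."""

    @staticmethod
    def forward(ctx, pred, target):
        ctx.save_for_backward(pred, target)
        # computed in numpy: autograd sees none of this
        diff = pred.detach().cpu().numpy() - target.detach().cpu().numpy()
        return torch.tensor((diff ** 2).mean(), dtype=pred.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        pred, target = ctx.saved_tensors
        # d/dpred of mean((pred - target)^2) is 2 * (pred - target) / N
        grad_pred = 2.0 * (pred - target) / pred.numel()
        # one gradient per forward input; target gets None
        return grad_output * grad_pred, None
```

Calling `NumpySquaredError.apply(pred, target).backward()` then fills `pred.grad` just as a torch-native loss would.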

Good luck.

K. Frank

Thanks a lot, K. Frank.
If I define the loss function another way, inheriting from `nn.Module`, must I use torch functions in all of the forward steps?

```python
class CrpsLossModule(nn.Module):
    def __init__(self, nClass, reduce=True):
        super(CrpsLossModule, self).__init__()
        self.nClass = nClass
        self.reduce = reduce

    def forward(self, outEjectF, targetEjectF):

        # forward code here

        if self.reduce:
            ...  # reduction code here (elided in the original post)
        else:
            return F_loss
```
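For what it's worth, here is a hedged sketch of what an all-torch version of such a module could look like — the sigmoid-based CDF is purely an assumption standing in for `utils_heart.real_to_cdf`, which isn't shown in the thread:

```python
import torch
import torch.nn as nn

class TorchCrpsLoss(nn.Module):
    """CRPS-style loss built only from torch ops, so autograd handles
    the gradients. The sigmoid CDF is an assumed stand-in for
    utils_heart.real_to_cdf, which the thread does not show."""

    def __init__(self, n_class, reduce=True):
        super().__init__()
        self.n_class = n_class
        self.reduce = reduce

    def forward(self, pred, target):
        bins = torch.arange(self.n_class, dtype=pred.dtype, device=pred.device)
        # smooth, differentiable step centered at the predicted value
        pred_cdf = torch.sigmoid(bins - pred.unsqueeze(-1))
        # heaviside step at the target value (no gradient needed here)
        target_cdf = (bins >= target.unsqueeze(-1)).to(pred.dtype)
        # CRPS-style score: mean squared CDF difference over the bins
        loss = (pred_cdf - target_cdf).pow(2).mean(dim=-1)
        return loss.mean() if self.reduce else loss
```

Because every operation is a torch `Tensor` function, `loss.backward()` works with no hand-written `backward`.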

Hello lzh21cen!

You could try it out and see. Pass a `requires_grad = True` tensor
through your loss function and see if `.backward()` gives the correct
result. Something like:

```python
input = torch.randn((2, 5), requires_grad=True)
target = ...
loss = my_custom_loss_function(input, target)
loss.backward()
```

(As a side note, I have no idea what `F_loss` means here, so I
can’t say whether your `forward` computes what you intend.)
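A related check worth knowing about is `torch.autograd.gradcheck`, which compares the gradients `backward()` produces against finite-difference estimates. The loss below is a simple stand-in, since the full custom loss isn't shown in this thread:

```python
import torch
from torch.autograd import gradcheck

# stand-in for the custom loss; the real CRPS code is not shown in full
def my_custom_loss_function(input, target):
    return (input - target).pow(2).mean()

# gradcheck compares analytic gradients against finite-difference
# estimates; it wants double-precision inputs for numerical accuracy
input = torch.randn(2, 5, dtype=torch.double, requires_grad=True)
target = torch.randn(2, 5, dtype=torch.double)
print(gradcheck(my_custom_loss_function, (input, target)))  # prints True
```

If the analytic and numerical gradients disagree, `gradcheck` raises an error telling you which input failed.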