Trouble with a custom autograd function

Hello,

I am trying to make a custom loss function based on the probability mass function of a negative binomial distribution.
The parameters total_count and probs come packed in a single (2, 1) tensor, which is produced by the previous layers of my model in the forward pass.

Using these parameters to define the negative binomial distribution, I want to take the negative log of its probability mass function as the loss. I have computed the gradient of this function w.r.t. total_count and probs, but I can't make the backward method work.
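
For reference, these are the gradients I derived, assuming NegativeBinomial(total_count=r, probs=p) evaluated at a target k has a log-PMF of the form below (my reading of the PyTorch parameterization):

\log P(k \mid r, p) = \log\Gamma(k + r) - \log\Gamma(r) - \log\Gamma(k + 1) + r \log(1 - p) + k \log p

L(r, p) = -\log P(k \mid r, p)

\frac{\partial L}{\partial r} = -\psi(k + r) + \psi(r) - \log(1 - p), \qquad
\frac{\partial L}{\partial p} = -\frac{k}{p} + \frac{r}{1 - p}

where \psi is the digamma function. These two partial derivatives are what I assign to the two gradient entries in the backward method below.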
So far I have this:

import torch
from torch.autograd import Function
from torch.distributions import NegativeBinomial


class NBLL_loss(Function):
    @staticmethod
    def forward(ctx, input, target):
        # input is a (2, 1) tensor holding total_count and probs
        tc, prb = input
        n_b = NegativeBinomial(total_count=tc, probs=prb)
        loss = -n_b.log_prob(target)
        ctx.save_for_backward(loss, tc, prb, target)
        return loss

    @staticmethod
    def backward(ctx, grad_loss):
        loss, tc, prb, target = ctx.saved_tensors
        dg = torch.digamma
        # dL/d(total_count)
        grad_loss[0] = -dg(target + tc) + dg(tc) - torch.log(1 - prb)
        # dL/d(probs)
        grad_loss[1] = -target * (1 / prb) + tc * (1 / (1 - prb))
        return grad_loss, None

I’m trying it out with the following lines:

loss = NBLL_loss.apply(torch.tensor([[5.], [0.5]], requires_grad=True), torch.tensor([5.]))
loss.backward()

The forward method works fine, but I'm getting "RuntimeError: No grad accumulator for a saved leaf!" on the backward call.

Can someone help me understand what I am doing wrong?

Thank you,