# Distribution backward grad inspection not matching numerical reproduction

Hello,

I have implemented a loss based on the negative binomial distribution's probability mass function.
I tried to validate this implementation by numerically reproducing the gradients with respect to the distribution's parameters.
Unfortunately, the obtained values do not match. I am unsure whether my loss function implementation is at fault, since I believe the numerical reproduction is correct according to these sources:
[source 1 (Negative Binomial distribution PMF and likelihood gradient)](http://vixra.org/pdf/1211.0113v1.pdf)
or
[source 2 (Negative Binomial distribution PMF and likelihood gradient)](https://en.wikipedia.org/wiki/Negative_binomial_distribution)
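For reference, with the parameterisation my code uses (x = target, r = total_count, p = probability; note the linked sources may use a different convention for which factor gets the exponent r), the quantities I am trying to reproduce are:

```latex
f(x;\, r,\, p) = \frac{\Gamma(x + r)}{\Gamma(r)\, x!}\; p^{x} (1 - p)^{r}

\frac{\partial \log f}{\partial r} = \psi(x + r) - \psi(r) + \log(1 - p)
\qquad
\frac{\partial \log f}{\partial p} = \frac{x}{p} - \frac{r}{1 - p}
```

where ψ is the digamma function. Since the loss is the negative log-likelihood, the expected gradients are the negatives of these expressions.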

Here is the code snippet with an example and prints:

```python
import torch
from scipy.special import digamma
import numpy as np

output_soft = torch.tensor([[0.1]], requires_grad=True)  # model output

def loss(input, target):
    # the same model output is used for both distribution parameters
    total_count = input
    probability = input
    # log Gamma(target + total_count)
    target_p_tc_gamma = torch.tensor([target + total_count], dtype=torch.float, requires_grad=True).lgamma()
    # log Gamma(total_count)
    r_gamma = total_count.lgamma()
    # log(target!)
    target_factorial = torch.tensor([target + 1], dtype=torch.float).lgamma()
    # binomial coefficient of the PMF
    combinatorial_term = torch.tensor([target_p_tc_gamma - r_gamma - target_factorial], dtype=torch.float, requires_grad=True).exp()
    prob_term = probability.pow(target)
    comp_prob_term = torch.tensor([1 - probability], dtype=torch.float, requires_grad=True).pow(total_count)
    likelihood_target = combinatorial_term * prob_term * comp_prob_term
    return -likelihood_target.log()

target = torch.tensor([5.])
loss = loss(output_soft, target)
loss.backward()
```
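For completeness, this is a sketch of the numerical reproduction I am comparing against: the analytic gradient of the negative log-PMF via SciPy's `digamma`, cross-checked with a central finite difference. The values 0.1 and 5 match the example above, and since `total_count` and `probability` are fed by the same input tensor, the two partial derivatives are summed.

```python
import numpy as np
from scipy.special import digamma, gammaln

# Same example values as above: model output 0.1 plays the role of
# both total_count (r) and probability (p); the observed count is 5.
theta = 0.1
x = 5.0

# Analytic gradients of the negative log-PMF
#   -log f = -(log G(x+r) - log G(r) - log(x!) + x log p + r log(1-p))
# evaluated at r = p = theta:
grad_r = -(digamma(x + theta) - digamma(theta) + np.log(1 - theta))
grad_p = -(x / theta - theta / (1 - theta))
# one input tensor feeds both parameters, so the gradients add up:
analytic = grad_r + grad_p

# Independent check via a central finite difference of the same loss:
def nll(t):
    return -(gammaln(x + t) - gammaln(t) - gammaln(x + 1)
             + x * np.log(t) + t * np.log(1 - t))

eps = 1e-6
fd = (nll(theta + eps) - nll(theta - eps)) / (2 * eps)
print(analytic, fd)  # the two values should agree closely
```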