Custom loss function RuntimeError tensor does not have a grad_fn

I'm trying to use a custom loss function and getting the error 'RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn'. The error occurs during loss.backward().

I'm aware that all computations must be done on tensors with 'requires_grad=True'. I'm having trouble implementing that because my code requires a nested for loop. Is there a way to create an empty tensor and append to it? Below is my code.

import math
import torch
from torch.autograd import Variable

def Gaussian_Kernal(x, mu, sigma):
  p = (1./(math.sqrt(2. * math.pi * (sigma**2)))) * torch.exp((-1.) * (((Variable(x)**2) - mu)/(2. * (sigma**2))))
  return p

class MEE(torch.nn.Module):
  def __init__(self):
    super(MEE,self).__init__()

  def forward(self,output, target, mu, variance):

    error = torch.subtract(Variable(output),Variable(target))
  
    error_diff = []
    for i in range(0, error.size(0)):
      for j in range(0, error.size(0)):
        error_diff.append(error[i] - error[j])

    error_diff = torch.cat(error_diff)
    torch.tensor(error_diff,requires_grad=True)

    loss = (1./(target.size(0)**2)) * torch.sum(Gaussian_Kernal(Variable(error_diff), mu, variance*(2**0.5)))

    loss = Variable(loss)

    return loss

You are breaking the computation graph by recreating tensors and by wrapping them in the deprecated Variable API (since PyTorch 0.4, Variable is a no-op and plain tensors track gradients themselves).
To keep the computation graph intact you need to operate on the tensors directly, so remove:

... Variable(output),Variable(target)
torch.tensor(error_diff,requires_grad=True)
loss = Variable(loss)

etc.
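For reference, here is a minimal sketch of the same loss with the graph kept intact. It keeps your kernel expression exactly as posted, and collects the pairwise differences with torch.stack instead of detaching them via torch.tensor:

import math
import torch

def Gaussian_Kernal(x, mu, sigma):
  # Same expression as in the question, minus the Variable wrapper.
  return (1. / math.sqrt(2. * math.pi * (sigma**2))) * torch.exp((-1.) * (((x**2) - mu) / (2. * (sigma**2))))

class MEE(torch.nn.Module):
  def forward(self, output, target, mu, variance):
    error = output - target  # plain tensor op, stays in the graph

    # torch.stack keeps the grad_fn of every element; torch.tensor(...)
    # would create a brand-new leaf with no history.
    error_diff = torch.stack([error[i] - error[j]
                              for i in range(error.size(0))
                              for j in range(error.size(0))])

    loss = (1. / (target.size(0)**2)) * torch.sum(
        Gaussian_Kernal(error_diff, mu, variance * (2**0.5)))
    return loss  # no Variable(...) re-wrap; loss already has a grad_fn

# Usage: gradients now flow back to whatever produced `output`.
output = torch.randn(8, requires_grad=True)
target = torch.randn(8)
loss = MEE()(output, target, mu=0.0, variance=1.0)
loss.backward()  # no RuntimeError; output.grad is now populated

If the quadratic Python loop becomes a bottleneck, the pairwise differences can also be computed in one vectorized step as error.unsqueeze(0) - error.unsqueeze(1), which stays in the graph just the same.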