How do we implement a custom loss that backpropagates with PyTorch?

In a neural network written in PyTorch, we have defined and used this custom loss:

def my_loss(output, target):
    global classes

    v = torch.empty(batchSize)
    xi = torch.empty(batchSize)

    # denominator of the softmax: sum of exponentiated logits per sample
    for j in range(0, batchSize):
        v[j] = 0
        for k in range(0, len(classes)):
            v[j] += torch.exp(output[j][k])

    # negative log-likelihood of the target class for each sample
    for j in range(0, batchSize):
        xi[j] = -torch.log(torch.exp(output[j][target[j]]) / v[j])

    loss = torch.mean(xi)
    print(loss)
    loss.requires_grad = True
    return loss

but training doesn’t converge to acceptable accuracy.
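For comparison, the same cross-entropy can be written entirely with differentiable tensor operations, so no `requires_grad` flag needs to be set by hand. This is a minimal sketch, assuming `output` holds raw logits of shape `(batch, num_classes)` and `target` holds class indices of shape `(batch,)`:

```python
import torch

def cross_entropy_loss(output, target):
    # log-softmax over the class dimension, computed in a numerically
    # stable way via logsumexp; every op here is differentiable
    log_probs = output - torch.logsumexp(output, dim=1, keepdim=True)
    # pick the log-probability of each sample's target class and average
    return -log_probs[torch.arange(output.shape[0]), target].mean()

# usage sketch with hypothetical shapes (batch of 4, 3 classes)
logits = torch.randn(4, 3, requires_grad=True)
targets = torch.tensor([0, 2, 1, 2])
loss = cross_entropy_loss(logits, targets)
loss.backward()  # gradients flow back to `logits` automatically
```

Because the loss is built from tensor ops on `output`, autograd tracks the whole computation; manually setting `loss.requires_grad = True` on a detached result does not reconnect it to the graph.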