Maximizing Gaussian Likelihood Using PyTorch

Hey, I am trying to maximize a Gaussian likelihood in PyTorch, but unfortunately I am getting very unreasonable results.

Here is the start of the code:

import torch
import numpy as np
from torch.autograd import Variable

# x holds the observed data (its definition is omitted here)
mu = Variable(torch.rand(1), requires_grad=True)
sigma = Variable(torch.rand(1), requires_grad=True)
learning_rate = 0.00002
for t in range(100000):
    NLL = -torch.sum(torch.log((1/(np.sqrt(2*3.14)*sigma))*torch.exp((x-mu)**2/(2*sigma**2))))

    if t % 1000 == 0:
        print("loglik =", -NLL.data.item(), "sigma =", sigma.data.item(), "mu =", mu.data.item())

    NLL.backward()
    mu.data -= learning_rate * mu.grad.data
    sigma.data -= learning_rate * sigma.grad.data
    mu.grad.data.zero_()
    sigma.grad.data.zero_()
I am not sure where this goes wrong. Do you think this could be implemented and solved using an optimizer like Adam?
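One thing that stands out in the code above is that the exponent in the density is positive: the Gaussian density is proportional to exp(-(x - mu)^2 / (2 sigma^2)), so the missing minus sign inside torch.exp would make the "likelihood" grow without bound. Below is a minimal sketch of the same fit using torch.optim.Adam, as asked about. The synthetic data, learning rate, and step count are my own assumptions (the post does not show x); it also optimizes log(sigma) instead of sigma so that sigma stays positive, which is a common reparameterization trick rather than something from the original code:

```python
import math
import torch

# Synthetic data, assumed for illustration: true mu = 5, true sigma = 2.
torch.manual_seed(0)
x = torch.randn(1000) * 2.0 + 5.0

mu = torch.rand(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)  # optimize log(sigma) to keep sigma > 0
optimizer = torch.optim.Adam([mu, log_sigma], lr=0.05)

for t in range(2000):
    optimizer.zero_grad()
    sigma = log_sigma.exp()
    # Negative log-likelihood of N(mu, sigma^2), written directly in log space
    # (equivalent to -sum(log(pdf)), but avoids exp/log round trips):
    nll = torch.sum(log_sigma + 0.5 * math.log(2 * math.pi)
                    + (x - mu) ** 2 / (2 * sigma ** 2))
    nll.backward()
    optimizer.step()

print(mu.item(), log_sigma.exp().item())  # should land near the true values 5 and 2
```

With the sign fixed, plain gradient descent as in the original loop should also work; Adam mostly just makes the learning rate less fiddly.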