The loss is NaN when I use a loss function defined with torch.nn.functional.mse_loss

The loss is always NaN when I use the following loss function:

import torch

def Myloss1(source, target):
    # element-wise squared errors (no reduction), then sum and take the square root
    loss = torch.nn.functional.mse_loss(source, target, reduction="none")
    return torch.sum(loss).sqrt()

...

loss = Myloss1(s, t)
loss.backward()


But when I use the following loss function instead, training proceeds normally:

def Myloss2(source, target):
    diff = target - source
    # Frobenius norm of the difference: sqrt(sum(diff ** 2))
    loss = torch.norm(diff)
    return loss
...

loss = Myloss2(s, t)
loss.backward()


Why can't I train with Myloss1? Aren't Myloss1 and Myloss2 mathematically equivalent?
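As far as I can tell they compute the same value: sqrt of the sum of squared differences, i.e. the Frobenius norm of (target - source). Here is a quick sanity check with random tensors (the shapes here are just an example, not my real data), where both functions should print the same forward value:

import torch

def Myloss1(source, target):
    loss = torch.nn.functional.mse_loss(source, target, reduction="none")
    return torch.sum(loss).sqrt()

def Myloss2(source, target):
    diff = target - source
    return torch.norm(diff)

# random example tensors, only for this comparison
s = torch.randn(4, 3, requires_grad=True)
t = torch.randn(4, 3)

print(Myloss1(s, t))  # sqrt(sum((s - t) ** 2))
print(Myloss2(s, t))  # Frobenius norm of (t - s), same value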

Please help me, thank you very much!