Hello,
I recently ran into some strange behavior with F.mse_loss.
Here’s the test I ran:
import torch
import torch.nn as nn
import torch.nn.functional as F

layer = nn.Linear(1, 3)
x = torch.rand(1, 1)
label = torch.rand(1, 3)
out = layer(x)  # out requires grad; label does not
print('Input: {}\nLabel: {}\nResult: {}'.format(x, label, out))

loss_1 = F.mse_loss(out, label)  # (input, target) -- the documented argument order
loss_2 = F.mse_loss(label, out)  # same tensors, arguments swapped
print('Loss1: {}\nLoss2: {}'.format(loss_1, loss_2))
Output:
Input: tensor([[0.6389]])
Label: tensor([[0.9091, 0.5892, 0.8812]])
Result: tensor([[ 0.2329, -0.2419, -0.5444]], grad_fn=<ThAddmmBackward>)
Loss1: 1.060153603553772
Loss2: 3.1804609298706055
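Since mean-squared error is symmetric in its two arguments, I'd expect both calls to return the same value. Redoing the arithmetic in plain Python with the values printed above (rounded to 4 decimals, so the numbers match only approximately):

```python
# Values copied from the printed tensors above (rounded to 4 decimals).
out = [0.2329, -0.2419, -0.5444]
label = [0.9091, 0.5892, 0.8812]

def mse(a, b):
    # Mean of the element-wise squared differences -- symmetric in a and b.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

print(mse(out, label))  # ~1.0601, matches Loss1
print(mse(label, out))  # identical -- swapping the arguments changes nothing

# The plain sum (no division by the 3 elements) reproduces Loss2:
print(sum((x - y) ** 2 for x, y in zip(out, label)))  # ~3.1803
```

So Loss1 matches the mean over all 3 elements, while Loss2 matches the unaveraged sum (equivalently, the sum divided by the batch size of 1). That suggests the two calls differ in how the result is reduced, not in the error itself.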
Am I missing something here?
Thanks!