Regression with Multiple Outputs

I have a multiple-input, multiple-output (MIMO) regression problem. When I use the MSE loss function I see only one MSE value. How is PyTorch calculating it? Does it take the mean of the MSEs of all the outputs?

I don’t know how you are passing multiple outputs to nn.MSELoss, as only one output tensor and one target tensor are expected:

import torch
import torch.nn as nn

criterion = nn.MSELoss()

output1 = torch.randn(10, 10, requires_grad=True)
output2 = torch.randn(10, 10, requires_grad=True)
target = torch.randn(10, 10)

# works: one output tensor and one target tensor
loss = criterion(output1, target)

# fails: a second output tensor cannot be passed
loss = criterion(output1, output2, target)
# TypeError: forward() takes 3 positional arguments but 4 were given

Could you share your code showing how the loss is calculated?

Here you go:

# Loss and optimizer
criterion = nn.MSELoss()

pred_output1 = torch.tensor([[1.0003, 0.9998, 0.9928],
                             [0.9962, 0.9955, 0.9874],
                             [0.9944, 0.9940, 0.9851],
                             [0.9315, 0.9306, 0.9251],
                             [0.9407, 0.9401, 0.9375],
                             [0.9877, 0.9868, 0.9778]])
# torch.Size([6, 3]) - three outputs

target = torch.tensor([[1.0000, 1.0000, 1.0000],
                       [1.0000, 1.0000, 1.0000],
                       [1.0000, 1.0000, 1.0000],
                       [0.9400, 0.9500, 0.9500],
                       [0.9500, 0.9500, 0.9500],
                       [0.9800, 0.9800, 0.9800]])

loss = criterion(pred_output1, target)

print(loss)
# tensor(0.0001)

Thanks for the code. The loss will be computed elementwise, as seen here:

criterion = nn.MSELoss()
pred_output1 = torch.randn(6, 3)
target = torch.randn(6, 3)

loss = criterion(pred_output1, target)
print(loss) 
# tensor(1.5746)

loss_manual = ((pred_output1 - target)**2).mean()
print(loss_manual)
# tensor(1.5746)

By default, reduction='mean' is used, so the squared errors of all samples and all outputs are averaged into a single scalar, which is why you only see one MSE value.
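
If you want to check the MSE of each output separately, you could use reduction='none' and reduce manually. A minimal sketch (the variable names here are just for illustration), assuming your three outputs are stored along dim 1:

# unreduced squared errors, shape [6, 3]
criterion_none = nn.MSELoss(reduction='none')
elementwise = criterion_none(pred_output1, target)

# one MSE per output (mean over the batch dimension)
per_output_mse = elementwise.mean(dim=0)
print(per_output_mse)

# averaging the per-output MSEs gives back the single scalar from above
print(per_output_mse.mean())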