Hi all,

I am trying to set up a multi-regression problem. Essentially, I have an input of 3 values, [2, 3, 4], and 2 outputs (ground truth/labels), [9, 24]. The first label is the sum and the second label is the product of all the inputs.

I am trying to set up a loss function that takes the MSE of the two labels separately and adds them.

I am not sure if this is the best approach to setting up the loss for this kind of problem, so any advice on how to create the loss the way I'm thinking, or a better approach, would be greatly appreciated!

P.S.

I have seen this on the forum somewhere:

```
import torch

def my_loss(output, target):
    loss = torch.mean((output - target)**2)
    return loss
```

Which is really close; however, how would I extract the individual labels, give each its own torch.mean call, and then add them and return the result?

You can create different functions.

```
def mse(output, target):
    loss = torch.mean((output - target)**2)
    return loss
```

```
def my_loss(outputs, targets):
    loss_one = mse(outputs[0], targets[0])
    loss_two = mse(outputs[1], targets[1])
    return loss_one + loss_two
```
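Putting the two functions together, here is a minimal self-contained check using the example labels from the question (the predicted values 8.5 and 22.0 are made up for illustration):

```python
import torch

def mse(output, target):
    # per-label mean squared error
    return torch.mean((output - target) ** 2)

def my_loss(outputs, targets):
    # one MSE per label, summed into a single scalar loss
    loss_one = mse(outputs[0], targets[0])
    loss_two = mse(outputs[1], targets[1])
    return loss_one + loss_two

# targets are the sum (9) and product (24) of the inputs [2, 3, 4];
# outputs stand in for a model's predictions
outputs = torch.tensor([8.5, 22.0], requires_grad=True)
targets = torch.tensor([9.0, 24.0])

loss = my_loss(outputs, targets)  # (8.5-9)^2 + (22-24)^2 = 0.25 + 4.0 = 4.25
```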

Then, in the training loop, do something like this for averaging:

```
loss_one = mse(outputs[0], targets[0])  # index the outputs and the targets, take average later
loss_two = mse(outputs[1], targets[1])  # index the outputs and the targets, take average later
total_loss = my_loss(outputs, targets)
total_loss.backward()
```
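As a sanity check that autograd handles the indexing, you can verify that each element of `outputs` receives its own gradient (the numbers here are illustrative):

```python
import torch

def mse(output, target):
    return torch.mean((output - target) ** 2)

outputs = torch.tensor([10.0, 20.0], requires_grad=True)
targets = torch.tensor([9.0, 24.0])

# indexing is a differentiable operation, so each slice stays in the graph
total_loss = mse(outputs[0], targets[0]) + mse(outputs[1], targets[1])
total_loss.backward()

# d/dx (x - t)^2 = 2*(x - t), so the gradients are 2*(10-9) = 2 and 2*(20-24) = -8
```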

That seems simple code-wise, but I have a few questions:

- Why are we using the mse twice at the beginning and never doing anything with the results?
- How is it okay to split the output tensor into separate values and still have it work for backpropagation? The solution seems to be exactly what I'm looking for, but I fear there is something missing.

Thanks in advance!