# Combining multiple loss functions

I’m making a simple autoencoder on MNIST digits.
The problem I’m facing is that I’m defining 3 different losses,
and when I combine them the loss doesn’t decrease, while using only one of them seems to work fine.

```python
epochs = 3
model_2.train()
for e in range(epochs):
    loss_r = [0, 0, 0]   # running sums for the three losses
    r = 0
    images = train.view(-1, 14*14)
    labels_1 = train
    labels_2 = test.view(-1, 784)
    x, x_en, x_dec = model_2(images)
    loss_1 = criterion_1(x, labels_2)
    loss_2 = criterion_2(x_en, labels_2)
    loss_3 = criterion_3(x_dec, labels_1)

    optimizer.zero_grad()
    # backward is called on each loss separately, so the gradients
    # of the three losses accumulate before the optimizer step
    loss_1.backward(retain_graph=True)
    loss_2.backward(retain_graph=True)
    loss_3.backward()
    optimizer.step()

    loss_r[0] += loss_1.item()
    loss_r[1] += loss_2.item()
    loss_r[2] += loss_3.item()
    if r % 10 == 9:
        print("epoch = {} batch = {} final_loss = {} aux_loss = {} classification_loss = {}"
              .format(e+1, r+1, loss_r[0]/10, loss_r[1]/10, loss_r[2]/10))
        loss_r = [0, 0, 0]

    r += 1
```

What seems to be the problem here?

Hi,

I think you need to specifically define the combination.

For instance,

```python
loss = loss_1 + loss_2 + loss_3
loss.backward()
```

I have never seen the approach you have used to combine multiple losses, so maybe the function you are calling is not doing what you expect.
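For completeness, here is a minimal sketch of the full training step with the summed loss, reusing the names from your snippet (`model_2`, `optimizer`, `criterion_1/2/3` and the tensors are assumed to be defined as in your code):

```python
# Minimal sketch of one training step using a single summed loss
optimizer.zero_grad()                  # clear gradients from the previous step
x, x_en, x_dec = model_2(images)       # forward pass
loss_1 = criterion_1(x, labels_2)
loss_2 = criterion_2(x_en, labels_2)
loss_3 = criterion_3(x_dec, labels_1)
loss = loss_1 + loss_2 + loss_3        # one scalar combining all three losses
loss.backward()                        # one backward pass through the sum
optimizer.step()                       # parameter update
```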

Best,
Nik


Both approaches (summing the losses and calling `backward` once, or calling `backward` on each loss separately) should result in the same accumulated gradients.
Since the gradients of all three losses are accumulated, you might need to lower the learning rate or scale the losses.
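For example, a quick sketch of scaling the losses before combining them (the weights below are made-up hyperparameters you would tune, not values from the original post):

```python
# Hypothetical weights to balance the contributions of the three losses
w1, w2, w3 = 1.0, 0.1, 0.5

optimizer.zero_grad()
loss = w1 * loss_1 + w2 * loss_2 + w3 * loss_3  # weighted combination
loss.backward()
optimizer.step()
```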


You can use the same criterion for all three losses, if you are fine with its setup (e.g. `reduction`, `weight` if passed, etc.).
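As an illustration of why the setup matters (a standalone sketch with dummy tensors, not code from the post): the same loss class returns very differently scaled values depending on `reduction`, which directly affects how the combined loss behaves.

```python
import torch
import torch.nn as nn

out = torch.randn(4, 784)     # dummy model output
target = torch.randn(4, 784)  # dummy target

mse_mean = nn.MSELoss(reduction='mean')  # averages over all elements
mse_sum = nn.MSELoss(reduction='sum')    # sums over all elements

print(mse_mean(out, target))  # O(1) value
print(mse_sum(out, target))   # roughly 4 * 784 times larger
```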