I am building an autoencoder, and I would like to use the latent layer as the input to a regression network (with 2 hidden layers and one output layer).

This means I have two loss functions: one for the AE and one for the regression. I have a few questions:

Do you suggest adding both loss values together and backpropagating on the sum?

If I instead want to backpropagate each model with respect to its own loss, how should I implement that in PyTorch? Should I call loss1.backward() and loss2.backward(), then opt.step()?

Backpropagate the AE loss through the encoder/decoder as usual, then backpropagate the regression loss on the regression layers. You just have to detach the latent tensor before feeding it to the regression head, so the regression loss does not backprop farther than necessary (i.e. into the encoder). Two backward() calls followed by a single opt.step() should be fine.
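A minimal sketch of this setup, assuming made-up dimensions and MSE for both losses (the class and layer sizes below are illustrative, not from the question). Because the latent tensor is detached before the regression head, the two losses have disjoint computation graphs, so two separate backward() calls need no retain_graph:

```python
import torch
import torch.nn as nn

class AERegressor(nn.Module):
    """Autoencoder plus a regression head on the latent layer (dims are hypothetical)."""
    def __init__(self, in_dim=16, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 8), nn.ReLU(), nn.Linear(8, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 8), nn.ReLU(), nn.Linear(8, in_dim))
        # Regression head: 2 hidden layers and one output layer, as in the question.
        self.regressor = nn.Sequential(
            nn.Linear(latent_dim, 8), nn.ReLU(),
            nn.Linear(8, 8), nn.ReLU(),
            nn.Linear(8, 1),
        )

    def forward(self, x):
        z = self.encoder(x)
        recon = self.decoder(z)
        # detach() keeps the regression loss from flowing back into the encoder;
        # drop it if you want the regression loss to shape the latent space too.
        pred = self.regressor(z.detach())
        return recon, pred

model = AERegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

x = torch.randn(32, 16)   # dummy inputs
y = torch.randn(32, 1)    # dummy regression targets

opt.zero_grad()
recon, pred = model(x)
loss_ae = mse(recon, x)   # reconstruction loss: updates encoder + decoder
loss_reg = mse(pred, y)   # regression loss: updates only the regression head
loss_ae.backward()
loss_reg.backward()
opt.step()
```

If you would rather optimize everything jointly, replace the two backward() calls with `(loss_ae + loss_reg).backward()` and remove the detach(); then the regression loss also shapes the encoder.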