I am currently experimenting with using a single U-Net for two predictions in an image segmentation task. The model output is
[32, 7, 256, 256], to be matched with the pathology and lung-lobes ground truth, i.e.
[32, [0,1], 256, 256] --> [32, 2, 256, 256] and
[32, [2,3,4,5,6], 256, 256] --> [32, 5, 256, 256]. However, I am stuck on how to efficiently backpropagate the individual losses, i.e. loss1.backward() and loss2.backward(), and how to get the evaluation metric after each epoch.
Your suggestions are much appreciated.
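For context, here is a minimal sketch of the setup I mean. The U-Net is replaced by a single stand-in convolution (the input channel count, the CrossEntropyLoss criteria, and the learning rate are all assumptions for illustration); the two separate backward calls need `retain_graph=True` on the first one, because both losses share the same graph through the network:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the U-Net: any module producing 7 output channels works here.
model = nn.Conv2d(1, 7, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(32, 1, 256, 256)                    # input batch (channel count assumed)
pathology_gt = torch.randint(0, 2, (32, 256, 256))  # 2-class target
lunglobes_gt = torch.randint(0, 5, (32, 256, 256))  # 5-class target

out = model(x)                       # [32, 7, 256, 256]
out_pathology = out[:, 0:2]          # channels [0, 1]       -> [32, 2, 256, 256]
out_lunglobes = out[:, 2:7]          # channels [2, ..., 6]  -> [32, 5, 256, 256]

loss1 = criterion(out_pathology, pathology_gt)
loss2 = criterion(out_lunglobes, lunglobes_gt)

optimizer.zero_grad()
# Two separate backward calls: the first must retain the graph,
# since both losses backpropagate through the same shared trunk.
loss1.backward(retain_graph=True)
loss2.backward()                     # gradients accumulate into the same .grad buffers
optimizer.step()
```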
One possibility is to add the two losses together. This saves computation and time, since you do not have to perform two backward passes. For the evaluation metric, you can still report the two losses individually.
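In code, that suggestion might look like the sketch below (the stand-in model, CrossEntropyLoss criteria, and shapes are assumptions, not the actual setup):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Conv2d(1, 7, kernel_size=3, padding=1)   # stand-in for the U-Net
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 64, 64)                       # smaller shapes for illustration
pathology_gt = torch.randint(0, 2, (8, 64, 64))
lunglobes_gt = torch.randint(0, 5, (8, 64, 64))

out = model(x)
loss1 = criterion(out[:, 0:2], pathology_gt)
loss2 = criterion(out[:, 2:7], lunglobes_gt)

optimizer.zero_grad()
(loss1 + loss2).backward()          # one backward pass over the shared graph
optimizer.step()

# Report the two losses individually as the per-task metric.
print(f"loss1={loss1.item():.4f}, loss2={loss2.item():.4f}")
```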
I appreciate your input @Dwight_Foster.
This is currently the approach I am adopting (as discussed in some threads), but I could not figure out how the computed parameters (weights and biases) are used for the individual predictions in the next epoch.
I am confused about what you mean. Does something change in the next epoch? In the next epoch the same thing should happen: your weights and biases will have been updated by the last step and should work in the next one. Is there something I am missing?
Based on my understanding, after the forward pass (in the 1st iteration), the differentiable loss produces updated weights and biases via the backward pass (and optimizer step), which are then used in the 2nd iteration, and the process repeats for the number of iterations that make up an epoch.
In this case (a single network making two predictions with differently weighted loss functions), how will the network keep associating its parameters with each prediction across the iterations of an epoch, so that it retains what was learnt in previous iterations (or epochs)?
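As I understand it, there is no per-prediction bookkeeping to do: the only task-specific parameters are the ones feeding the respective output channels, while the shared trunk simply receives the sum of the gradients from both (weighted) losses and is updated in place, so iteration 2 starts from the weights produced by iteration 1. A minimal sketch of this over several iterations (the tiny trunk/head split, the 0.5 task weight, and the optimizer settings are arbitrary assumptions):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny shared trunk with a 7-channel head, standing in for the U-Net.
trunk = nn.Conv2d(1, 4, kernel_size=3, padding=1)
head = nn.Conv2d(4, 7, kernel_size=1)
optimizer = torch.optim.Adam(list(trunk.parameters()) + list(head.parameters()), lr=1e-2)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 1, 32, 32)
gt1 = torch.randint(0, 2, (4, 32, 32))   # pathology target (2 classes)
gt2 = torch.randint(0, 5, (4, 32, 32))   # lung-lobes target (5 classes)

losses = []
for it in range(30):                     # several iterations of one "epoch"
    out = head(torch.relu(trunk(x)))
    # Example task weighting (1.0 and 0.5 are arbitrary here).
    loss = criterion(out[:, :2], gt1) + 0.5 * criterion(out[:, 2:], gt2)
    optimizer.zero_grad()
    loss.backward()                      # gradients from BOTH tasks flow into the trunk
    optimizer.step()                     # shared weights updated in place
    losses.append(loss.item())
```

The combined loss on this fixed batch should decrease across iterations, which is exactly the "mastering what was learnt previously": each step refines the same single set of parameters, rather than maintaining separate parameters per prediction.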