Removing part of the loss function leads to an out-of-memory error


The loss function of my model consists of several sub-losses, which are combined with weighting factors:

loss = factor_1 * loss_1 + factor_2 * loss_2 + factor_3 * loss_3

Sometimes, to debug my model, I need to use only one sub-loss, e.g.:
loss = factor_3 * loss_3

but this leads to an OOM error.

Instead, I have to use: loss = 0 * loss_1 + 0 * loss_2 + factor_3 * loss_3
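For context, the zero-factor workaround can at least be made more readable by keeping the factors in a dict and toggling them there. This is only a sketch: the helper name `combine_losses` is made up, and the sub-losses are plain numbers here for illustration, whereas in the real model they would be tensors produced by the forward pass.

```python
def combine_losses(sub_losses, factors):
    """Weighted sum of the sub-losses whose factor is non-zero.

    sub_losses and factors are dicts keyed by the same loss names.
    Returns 0 if every factor is zero.
    """
    return sum(f * sub_losses[name] for name, f in factors.items() if f != 0)


# Normal training: all three terms contribute.
sub_losses = {"loss_1": 0.5, "loss_2": 1.25, "loss_3": 2.0}
factors = {"loss_1": 1.0, "loss_2": 0.1, "loss_3": 0.01}
loss = combine_losses(sub_losses, factors)

# Debugging: zero out all but loss_3 instead of deleting the terms.
debug_factors = {"loss_1": 0.0, "loss_2": 0.0, "loss_3": 0.01}
debug_loss = combine_losses(sub_losses, debug_factors)
```

This keeps the full expression structure intact while letting a single config dict decide which terms are active.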

Is there a more elegant way to deal with this?


Are you storing the losses somewhere?
It's strange that using 0 * tensor works while omitting it throws an OOM error.
Could you post a reproducible code snippet so that we can have a look?