Is it normal to train with 2 datasets and/or 2 different losses?

I have two training sets: one labeled and one unlabeled.
During training, I simultaneously load one batch from the labeled set and compute a loss one way, and one batch from the unlabeled set and compute a loss another way. Finally I sum the two losses and call loss.backward().
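Roughly, my training step looks like the sketch below (simplified; model, optimizer, supervised_criterion, unsupervised_criterion, labeled_loader and unlabeled_loader are placeholders for my actual code):

```python
for (x_l, y_l), x_u in zip(labeled_loader, unlabeled_loader):
    optimizer.zero_grad()

    # loss on the labeled batch, computed one way
    loss_labeled = supervised_criterion(model(x_l), y_l)

    # loss on the unlabeled batch, computed another way
    loss_unlabeled = unsupervised_criterion(model(x_u))

    # sum the two losses and back-propagate once
    loss = loss_labeled + loss_unlabeled
    loss.backward()
    optimizer.step()
```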

Is this approach OK? It seems quite uncommon to me, so I just want to ask whether the autograd engine will correctly back-propagate through it.
Thank you.

Semi-supervised learning is a relatively common technique for dealing with situations in which labeled data isn’t abundant, and it can look exactly like this.
To PyTorch’s autograd it is much the same whether the loss has 1, 2, or 10 terms - minibatch losses are typically just sums or means of the per-sample losses under the hood, too.
If the two computations are largely separate, you could also forward with the first batch, backward, then forward with the second and backward again, as in the sketch below. PyTorch will accumulate the gradients of the two backward passes (which is why you need to zero gradients), and this can be much easier on GPU memory / let you afford larger batch sizes.
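A minimal sketch of that alternative, reusing the placeholder names from your snippet (the exact losses don’t matter, only the two backward calls before a single optimizer step):

```python
for (x_l, y_l), x_u in zip(labeled_loader, unlabeled_loader):
    optimizer.zero_grad()

    # first forward/backward: the graph of the supervised loss can be
    # freed before the second forward, reducing peak GPU memory
    loss_labeled = supervised_criterion(model(x_l), y_l)
    loss_labeled.backward()

    # second forward/backward: these gradients are *added* to the ones
    # already stored in .grad (this is why zero_grad() is needed above)
    loss_unlabeled = unsupervised_criterion(model(x_u))
    loss_unlabeled.backward()

    # single optimizer step on the accumulated gradients
    optimizer.step()
```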

Best regards

Thomas
