I have two training sets: one labeled and one unlabeled.

When training, I simultaneously load one batch from the labeled set and compute its loss one way, and one batch from the unlabeled set and compute its loss another way. Finally I sum the two losses and call `loss.backward()`.
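In code, the setup I mean is roughly this (the model and the two loss terms are just placeholders, not my actual ones):

```python
import torch
import torch.nn as nn

# Hypothetical minimal sketch: one model, one labeled batch, one
# unlabeled batch, two losses summed before a single backward().
model = nn.Linear(4, 2)
ce = nn.CrossEntropyLoss()

# Labeled batch: standard supervised cross-entropy.
x_lab = torch.randn(8, 4)
y_lab = torch.randint(0, 2, (8,))
loss_sup = ce(model(x_lab), y_lab)

# Unlabeled batch: some unsupervised term, e.g. entropy minimization.
x_unl = torch.randn(8, 4)
probs = torch.softmax(model(x_unl), dim=1)
loss_unsup = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()

# Summing joins both computation graphs; backward() accumulates
# gradients from both terms into each parameter's .grad.
loss = loss_sup + loss_unsup
loss.backward()

# Every parameter now holds d(loss_sup)/dp + d(loss_unsup)/dp.
print(all(p.grad is not None for p in model.parameters()))
```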

Is this approach OK? It seems uncommon to me, so I just want to ask: does the autograd engine correctly know how to back-propagate through the sum of the two losses?

Thank you.