My training loss is a sum of several component losses. For the actual optimization I only need the sum, but I want to log the value of each component loss at every iteration.
Naively, I would call log(loss1.item()) and then log(loss2.item()), but to my understanding each call to item() forces a CUDA synchronization, which I want to avoid as much as possible.
Am I right to be worried? What is usually done in such situations? Should I stack the losses into a single tensor (on the GPU), transfer it to the CPU in one call, and split the values back out there? (As far as I know, item() only works on scalar tensors, so for a vector I would use something like .tolist() or .cpu().) Or should I simply avoid logging them this often in the first place?
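To make the question concrete, here is a minimal sketch of the "one transfer instead of N" idea I have in mind. The losses here are hypothetical placeholder tensors (on a real setup they would live on the GPU and come out of the forward pass); the point is that stack + tolist() should trigger a single device-to-host copy rather than one sync per loss:

```python
import torch

# Placeholder component losses; in practice these would be CUDA scalars
# produced by the forward pass.
loss1 = torch.tensor(0.5)
loss2 = torch.tensor(1.5)

# Backprop would use the sum only.
total = loss1 + loss2

# One transfer for all logged values: stack on-device, then move the
# whole vector to the host in a single call instead of item() per loss.
values = torch.stack([loss1.detach(), loss2.detach()]).tolist()
log_loss1, log_loss2 = values
```

Is this roughly the standard pattern, or is there a better-supported approach (e.g. accumulating on the GPU and only syncing every k iterations)?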