Specify Outputs to Include in Loss Function

I was looking into a way to dynamically change the loss function of a net so that it only evaluates loss on specified output nodes and ignores the rest of the network's outputs. Initially, I was planning to write a custom loss function that computes loss only on the outputs I specify, but it seems the loss functions are all implemented in C to improve speed, etc. I could still go down that route, but it would involve recompiling PyTorch on every machine I want to train on with this customized loss function, and the whole thing seems like a lot of work. Would a better approach be to use some kind of tensor manipulation to achieve the same effect before passing my outputs and labels to a standard loss function? If so, how could I go about this?

You can .chunk() or .split() out the portion of the tensor you're interested in calculating loss on. Post some code and we can dive deeper into suggested examples. Recompiling should not be necessary for your suggested use case.
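A minimal sketch of what that could look like, assuming a hypothetical network with 10 output nodes where only some of them should contribute to the loss (the shapes, split sizes, and indices below are made up for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical setup: a batch of 8 samples, a network with 10 output nodes,
# but labels exist only for the 4 nodes we care about.
outputs = torch.randn(8, 10, requires_grad=True)
targets = torch.randn(8, 4)

loss_fn = nn.MSELoss()

# Option 1: slice out the columns of interest before the loss call
loss = loss_fn(outputs[:, :4], targets)

# Option 2: split the output tensor and keep only the part you need
kept, _ignored = outputs.split([4, 6], dim=1)
loss_alt = loss_fn(kept, targets)

# Option 3: pick arbitrary (non-contiguous) output indices with index_select
idx = torch.tensor([0, 2, 5, 7])  # hypothetical node indices
loss_sel = loss_fn(outputs.index_select(1, idx), targets)

loss.backward()  # gradients flow only through the selected output nodes
```

Since indexing, split, and index_select are all differentiable ops, autograd only propagates gradients through the selected outputs; the ignored nodes simply receive zero gradient, so no custom C-level loss or recompilation is needed.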
