Hello fellow Pytorchers!
I am working on a CNN that makes directional predictions of a market based on a visual representation of the recent market state. The model predicts one of three classes: down, zero, and up. I use the torch.nn.functional.cross_entropy function to determine the loss, which does its job.
However, for practical purposes I want to specifically optimize the precision score of the up and down predictions. To do this, I need to penalize up-predictions that are actually down harder than, say, misclassified zero-predictions. My labels are evenly distributed, and I know I can pass a class weight vector (the `weight` argument of cross_entropy) to bias the model toward zero-predictions, which slightly improves the precision scores for the up and down predictions.
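For reference, this is roughly what the vector-weight approach looks like; the weight values here are assumptions purely for illustration:

```python
import torch
import torch.nn.functional as F

# Illustrative logits for a batch of 4 samples over 3 classes
# (0 = down, 1 = zero, 2 = up).
logits = torch.tensor([[1.2, 0.3, -0.5],
                       [0.1, 0.9, 0.2],
                       [-0.4, 0.2, 1.1],
                       [0.5, 0.5, 0.5]])
targets = torch.tensor([0, 1, 2, 1])

# Up-weighting the "zero" class (values are assumed) nudges the model
# toward predicting zero, trading up/down recall for up/down precision.
class_weights = torch.tensor([1.0, 2.0, 1.0])
loss = F.cross_entropy(logits, targets, weight=class_weights)
```

Note that `weight` only scales the loss per *true* class; it cannot distinguish between the different wrong predictions a sample might receive.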
However, I am looking to apply a weight matrix instead of a vector, so I can dictate the loss per (true class, predicted class) pair and further improve precision. An example of such a matrix would be the following:
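As a sketch of what such a penalty matrix might look like, and one way to turn it into a differentiable loss (the matrix values and the `matrix_weighted_loss` helper are my own illustrative assumptions, not a built-in API):

```python
import torch
import torch.nn.functional as F

# Hypothetical 3x3 penalty matrix: rows = true class, cols = predicted
# class (0 = down, 1 = zero, 2 = up). The diagonal is 0 (correct
# predictions cost nothing); predicting the opposite direction is
# penalised hardest.
penalty = torch.tensor([[0.0, 1.0, 3.0],   # true down
                       [1.0, 0.0, 1.0],   # true zero
                       [3.0, 1.0, 0.0]])  # true up

def matrix_weighted_loss(logits, targets, penalty):
    """Expected misclassification cost: for each sample, weight the
    predicted class probabilities by the penalty row of its true class."""
    probs = F.softmax(logits, dim=1)
    # penalty[targets] selects the penalty row for each sample's true label
    return (probs * penalty[targets]).sum(dim=1).mean()

logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
loss = matrix_weighted_loss(logits, targets, penalty)
```

Because the penalty enters through the softmax probabilities rather than a hard argmax, the loss stays differentiable and can be minimized with the usual optimizers.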
Is there any built-in functionality for this in PyTorch, or does this require me to do some tinkering 'under the hood'?