I’m completely new to PyTorch, so apologies in advance if this is very silly.
My goal is to post-process (evolutions of) probabilistic forecasts for which the outcome is known. For example, I have a forecast of the form [.5, .6, .8] together with the information that it resolved positively, i.e. resolution=1. (In this example, the forecast started off uncertain at 50% and converged towards 80%, so not too bad.)
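In code, one data point might look like this (just my own made-up representation):

```python
import torch

# A forecast that evolved from 50% to 80% and resolved positively.
forecast = torch.tensor([0.5, 0.6, 0.8])
resolution = torch.tensor(1.0)
```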
The input to my network is a probability x_1 plus further relevant information x_2, x_3, ... accessible at the time the forecast was made. Its output should be a different probability y_hat. The loss function I'm interested in is the negative log score

-(y_true*log(y_hat) + (1-y_true)*log(1-y_hat)).
Since y_true is either 0 or 1, this can be seen as a Kullback-Leibler divergence loss if we consider not only the KL distance between y_true and y_hat, but also add the KL distance between 1-y_true and 1-y_hat.
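For concreteness, writing this "from scratch" would presumably look something like the following (the clamp to avoid log(0) is my own addition; not sure it's idiomatic):

```python
import torch

def log_score_loss(y_hat, y_true, eps=1e-7):
    # Clamp to keep log() finite if y_hat hits exactly 0 or 1.
    y_hat = y_hat.clamp(eps, 1 - eps)
    # Negative log score, averaged over the batch.
    return -(y_true * torch.log(y_hat)
             + (1 - y_true) * torch.log(1 - y_hat)).mean()
```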
(I’d also like to weigh these contributions to the log score by the time elapsed between two consecutive features, but unlike other losses, KLDivLoss — PyTorch 1.11.0 documentation doesn’t seem to take a weight argument.)
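The weighting I have in mind would be roughly this, where dt is a hypothetical tensor holding the elapsed time attached to each forecast:

```python
import torch

def weighted_log_score_loss(y_hat, y_true, dt, eps=1e-7):
    y_hat = y_hat.clamp(eps, 1 - eps)
    # Per-forecast negative log score terms.
    per_term = -(y_true * torch.log(y_hat)
                 + (1 - y_true) * torch.log(1 - y_hat))
    # Weight each term by its elapsed time; normalise by the total
    # so the loss scale stays comparable across batches.
    return (dt * per_term).sum() / dt.sum()
```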
Is there a neat way to get the loss function described above, e.g. by somehow mirroring the output data, or do I have to write it “from scratch” myself as sketched above?