Implementing quantile loss

Hi,

I’m wondering if there is an established way to use quantile loss in PyTorch? I’d like to build a network that outputs several quantiles at once for each prediction.

How would I go about it?

Thanks

Check the links in https://github.com/pytorch/pytorch/issues/38035. I haven’t used those implementations myself, but I experimented with the “asymmetric Laplace distribution” and “Huber quantile loss” instead; the latter has gradients that vary with the error magnitude instead of the {-1, +1}-style constant gradients of the plain quantile loss, and it worked better from what I recall.
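For reference, here is a minimal sketch of both losses as I understand them (the function names, argument shapes, and the `k` threshold are my own choices, not from the linked issue). The plain quantile (pinball) loss penalizes under- and over-prediction asymmetrically per quantile; the Huber variant replaces the absolute error with a Huber term so the gradient scales with the error near zero:

```python
import torch

def quantile_loss(pred, target, quantiles):
    # pred: (batch, n_quantiles) network outputs
    # target: (batch,) single observation per input
    # quantiles: (n_quantiles,) levels, e.g. [0.1, 0.5, 0.9]
    errors = target.unsqueeze(-1) - pred                    # broadcast to (batch, n_quantiles)
    # pinball loss: q * e if e >= 0, (q - 1) * e otherwise
    return torch.maximum(quantiles * errors, (quantiles - 1) * errors).mean()

def huber_quantile_loss(pred, target, quantiles, k=1.0):
    # smooth variant (as used in QR-DQN): asymmetric weight times a Huber term
    errors = target.unsqueeze(-1) - pred
    huber = torch.where(errors.abs() <= k,
                        0.5 * errors ** 2,
                        k * (errors.abs() - 0.5 * k))
    weight = torch.abs(quantiles - (errors.detach() < 0).float())
    return (weight * huber / k).mean()
```

Both accept a `(batch, n_quantiles)` prediction so one forward pass yields all quantiles at once.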

I’ve looked at that as well as the pytorch-forecasting implementation, but I’m not sure I follow it: there is some shape manipulation that doesn’t make sense to me, such as unsqueezing the result.

Hmm, yes, it is tricky. The network should produce multiple ordered predictions, but you usually have only one observation per input, so you unsqueeze the target to broadcast it against all the quantile outputs. Since the quantile-specific gradients still differ, the prediction layer eventually learns to output a reasonable quantile grid, conditioned on the input.
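A small shape walk-through of that unsqueeze-and-broadcast step might make it concrete (shapes and quantile levels here are just illustrative):

```python
import torch

quantiles = torch.tensor([0.1, 0.5, 0.9])
pred = torch.randn(8, 3, requires_grad=True)   # (batch, n_quantiles) from the network
target = torch.randn(8)                        # one observation per input

# unsqueeze so the single target broadcasts against every quantile column:
# (8, 1) - (8, 3) -> (8, 3)
errors = target.unsqueeze(-1) - pred
loss = torch.maximum(quantiles * errors, (quantiles - 1) * errors).mean()
loss.backward()

# each quantile column receives a different asymmetric gradient, which is
# what pushes the columns apart into an ordered quantile grid over training
print(pred.grad.shape)
```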
