Stable and efficient implementation of logcumsumexp

Hello Artyom!

As it stands, it cannot be done, short of someone writing such a function.

You presumably understand the numerical issues with calculating
logsumexp naively. (See, for example, the discussion in Wikipedia’s
LogSumExp article.)
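For concreteness, here is a minimal sketch of the issue (the function
names here are my own, just for illustration): exponentiating large
values overflows in floating point, while the standard max-subtraction
trick keeps everything finite.

```python
import torch

def logsumexp_naive(x):
    # Naive version: exp() overflows to inf for large inputs.
    return torch.log(torch.exp(x).sum())

def logsumexp_stable(x):
    # Standard trick: subtract the maximum before exponentiating,
    # then add it back outside the log.
    m = x.max()
    return m + torch.log(torch.exp(x - m).sum())

x = torch.tensor([1000.0, 1000.0])
print(logsumexp_naive(x))   # inf (overflow)
print(logsumexp_stable(x))  # ~ 1000.6931, i.e. 1000 + log(2)
```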

So if pytorch had, for example, a cummax tensor function, you
could implement logcumsumexp using pytorch tensor functions.

But this doesn’t exist (yet). See:

https://stackoverflow.com/questions/55665624/vectorized-implementation-of-cumulative-maximum-in-pytorch-with-requires-grad-tr

https://github.com/pytorch/pytorch/issues/20240

and

https://discuss.pytorch.org/t/sliding-max-over-dimension/49799

So, short of writing the logcumsumexp (or related) tensor
function “from scratch,” you would have to use a loop to get
the “running maximum” (cummax) part, thus forgoing some
of the efficiency provided by using just tensor functions.
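As a sketch of what such a loop-based version might look like (the
helper name is my own, not a pytorch function): each step folds the
next element into the running result with a numerically stable
two-term logsumexp, which tracks the running maximum along the way.

```python
import torch

def logcumsumexp_1d(x):
    # Sequential, numerically stable logcumsumexp for a 1-d tensor.
    # Each step is a stable two-term logsumexp:
    #   r_i = m + log(exp(r_{i-1} - m) + exp(x_i - m)),
    # where m = max(r_{i-1}, x_i) plays the "running maximum" role.
    results = [x[0]]
    running = x[0]
    for i in range(1, x.numel()):
        m = torch.max(running, x[i])
        running = m + torch.log(torch.exp(running - m) + torch.exp(x[i] - m))
        results.append(running)
    return torch.stack(results)
```

Because it uses only differentiable tensor operations, autograd
should backpropagate through it, but the python loop will be slow
for long sequences, compared with a single built-in tensor function.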

Good luck.

K. Frank