Hi,

I’m trying to implement the log1mexp function, i.e. log(1 - exp(-x)), computed accurately. This is eq. (7) in the following note: https://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf.

For my application, exp(-x) can be very close to 1 at times, so a numerically stable implementation is necessary. I’m considering three ways to implement this in PyTorch: (1) compose it from existing PyTorch functions and wrap that in a Python def; (2) extend autograd.Function and define my own forward and backward; (3) write some C code and interface it with PyTorch.
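For reference, here is a minimal sketch of alternative (1), composing existing PyTorch ops. It follows the branch-switching rule of eq. (7) in the note (switch at x = log 2); the function name `log1mexp` is just my choice, and this is only a sketch, not a vetted implementation:

```python
import math
import torch

def log1mexp(x):
    # Sketch of log(1 - exp(-x)) for x > 0, following eq. (7) of the note:
    # switch between two algebraically equivalent forms at x = log(2)
    # to avoid catastrophic cancellation.
    return torch.where(
        x <= math.log(2),
        torch.log(-torch.expm1(-x)),   # accurate when exp(-x) is close to 1
        torch.log1p(-torch.exp(-x)),   # accurate when exp(-x) is close to 0
    )
```

Since this uses only built-in differentiable ops, autograd derives the backward pass automatically, so no custom autograd.Function or C code should be needed for correctness, only (possibly) for speed.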

Which one is most suitable? Thanks for the help.