Inverse of sigmoid in PyTorch

Is there an inverse of sigmoid (a logit with domain (-1, 1)) in PyTorch, i.e. something that inverts the output of nn.Sigmoid()? The scipy logit function only accepts the domain (0, 1), and I’d like (-1, 1).

Thanks!

Hello,

note that the name sigmoid might mean different things to different groups of people.

Here, most commonly, sigmoid is sigmoid(x) = 1/(1+torch.exp(-x)), mapping the real line to (0, 1), so the inverse logit(y) = torch.log(y/(1-y)) is defined on (0, 1) only.
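For instance, a minimal round-trip sketch (assuming a reasonably recent PyTorch, which also ships logit as the built-in torch.logit):

import torch

x = torch.randn(5)
y = torch.sigmoid(x)             # maps the real line to (0, 1)
x_back = torch.log(y / (1 - y))  # logit, the inverse of sigmoid
print(torch.allclose(x, x_back, atol=1e-5))               # True, up to float error
print(torch.allclose(x_back, torch.logit(y), atol=1e-5))  # the built-in agrees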

If you have rescaled sigmoid to -1 + 2/(1+torch.exp(-x)) to map to (-1, 1), you could use the logit above as logit(0.5*(1+y)).
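Concretely, a small sketch of that rescaled version and its inverse (the helper names are just for illustration):

import torch

def scaled_sigmoid(x):
    # rescaled sigmoid: maps the real line to (-1, 1)
    return -1 + 2 / (1 + torch.exp(-x))

def scaled_logit(y):
    # map (-1, 1) back to (0, 1) first, then apply the usual logit
    p = 0.5 * (1 + y)
    return torch.log(p / (1 - p))

x = torch.randn(5)
print(torch.allclose(scaled_logit(scaled_sigmoid(x)), x, atol=1e-5))  # True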

If you want the inverse of tanh, which is perhaps the most common mapping of the real line to (-1, 1), you could code the inverse up yourself with torch.log, as artanh(y) = 0.5*(torch.log(1+y)/(1-y)).

Best regards

Thomas


Thanks, Thomas. This was a helpful clarification of the torch sigmoid implementation.

Best,
Michael

@tom, do you know of any activation function whose inverse has output range (0, inf)? I am trying to generate values that are always greater than zero, so when inverting the generated output I don’t want the logit, for example, to map sigmoid outputs < 0.5 to negative numbers.

Softplus is probably the most common choice if you don’t want ReLU. Or pick one from the rectifier zoo on Wikipedia that does what you want (some are not strictly positive, though).
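For example, a rough sketch of softplus and its inverse (softplus_inverse is not in torch itself, just a hand-rolled helper, and is only valid for strictly positive inputs):

import torch
import torch.nn.functional as F

def softplus_inverse(y):
    # inverse of softplus(x) = log(1 + exp(x)); only defined for y > 0
    # y + log(1 - exp(-y)) is a numerically stabler form of log(exp(y) - 1)
    return y + torch.log(-torch.expm1(-y))

x = torch.randn(5)
y = F.softplus(x)  # strictly positive
print(torch.allclose(softplus_inverse(y), x, atol=1e-5))  # True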

Best regards

Thomas

You got the parentheses of the log wrong; it should be:
artanh(y) = 0.5 * torch.log((1+y)/(1-y))
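As a quick sanity check, this matches torch.atanh, which newer PyTorch versions provide as a built-in:

import torch

y = torch.empty(5).uniform_(-0.99, 0.99)  # artanh is only defined on (-1, 1)
manual = 0.5 * torch.log((1 + y) / (1 - y))
print(torch.allclose(manual, torch.atanh(y), atol=1e-5))  # True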


I know this is a bit necro, but … wouldn’t a function whose inverse has output range (0, inf) mean that any input value less than 0 would be illegal? Wouldn’t that destabilize the network somewhat?

Alternatively, I guess squaring sort of complies with this, if you always implicitly take the positive square root in the inverse direction, perhaps?
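Something like this, as a sketch of that sign ambiguity:

import torch

x = torch.randn(5)
y = x ** 2              # forward: always >= 0
x_back = torch.sqrt(y)  # inverse: recovers |x|, the sign is lost
print(torch.allclose(x_back, x.abs(), atol=1e-5))  # True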
