Ensuring numerical stability

How does PyTorch ensure numerical stability for sigmoid? Python's double-precision floats can only represent values up to approximately 1.8e+308, so exp() already overflows once its argument exceeds roughly 709. How is this handled? I tried to go through the source code but couldn't find the implementation of sigmoid.

Hi @Millon_Madhur_Das,

You can change the default torch.dtype to torch.float64 for all operations via torch.set_default_dtype(torch.float64) (docs: torch.set_default_dtype — PyTorch 2.0 documentation).
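A minimal sketch of what that looks like in practice (the input values here are just illustrative):

```python
import torch

# Make newly created floating-point tensors default to float64
torch.set_default_dtype(torch.float64)

x = torch.tensor([-800.0, -50.0, 0.0, 50.0, 800.0])  # created as float64 now
print(x.dtype)            # torch.float64
print(torch.sigmoid(x))   # saturates smoothly towards 0 and 1, no overflow/NaN
```

Note that float64 only widens the representable range; sigmoid still saturates to exactly 0 or 1 for large-magnitude inputs.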

If you want to see the source code for torch.sigmoid, you'll have to look for it on the GitHub repo; the actual kernels are written in C++/CUDA as part of ATen rather than in Python, which is why they're hard to find from the Python side (I'm sure someone will know the exact path to the file).
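For intuition, the standard trick (shown here as a sketch, not PyTorch's actual kernel, which may differ) is to branch on the sign of the input so that exp() is only ever evaluated on a non-positive argument and therefore cannot overflow:

```python
import torch

def stable_sigmoid(x: torch.Tensor) -> torch.Tensor:
    # Illustrative sketch, not PyTorch's actual implementation:
    # exp() is only applied to non-positive values, so it never overflows.
    out = torch.empty_like(x)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + torch.exp(-x[pos]))   # x >= 0: exp(-x) <= 1
    e = torch.exp(x[~pos])                        # x <  0: exp(x)  <  1
    out[~pos] = e / (1.0 + e)
    return out

x = torch.tensor([-1000.0, -10.0, 0.0, 10.0, 1000.0])
print(stable_sigmoid(x))
print(torch.sigmoid(x))   # agrees with the sketch above
```

(In IEEE arithmetic the naive 1 / (1 + exp(-x)) also happens to return 0 rather than NaN when exp(-x) overflows to inf, but the branched form avoids the overflow entirely.)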

As a side note, you can always represent your problem in the log-domain, which gives you a much larger dynamic range than working in the linear domain.
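For example, torch.nn.functional.logsigmoid computes log(sigmoid(x)) directly in the log-domain and stays finite even where sigmoid(x) underflows to exactly 0:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-1000.0, -50.0, 0.0, 50.0, 1000.0])

print(F.logsigmoid(x))              # finite everywhere, e.g. -1000.0 at x = -1000
print(torch.log(torch.sigmoid(x)))  # -inf at x = -1000, since sigmoid underflows to 0
```

The same idea is why torch.nn.BCEWithLogitsLoss is preferred over a sigmoid followed by torch.nn.BCELoss.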