Softmax only returns close-to-zero values

Hello,
I was training a modified GAN (One Class Adversarial Network) for 2-class classification, and I encountered a weird bug:
As I wanted to get probabilities from logits using softmax (I tried both nn.functional.softmax and nn.Softmax), the outputs were just close to zero (and this was NOT caused by computing along the wrong dim). The logits ranged from -20 to 20, which I at first thought could be the problem, but even for small differences like -2 / 2 it output values like 10^-5 and 10^-8. I then used a simple sigmoid on the first coordinate of the logits and it was all fine, but I wanted to discuss this issue and why a robust function like softmax could have this problem. By the way, softmax worked correctly at first (for a couple of hundred epochs), but then it suddenly started outputting something wrong.

Hi Basile!

As you’ve described things, this can’t happen (unless I give a perverse
interpretation to your description). The values softmax returns along the
specified dimension always sum to one, so with two classes at least one of
the two probabilities must be at least 0.5.
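
Here is a minimal sketch (not your training code) showing what softmax
should return for logits of -2 and 2:

```python
import torch
import torch.nn.functional as F

# Two logits of -2 and 2, shaped (batch = 1, classes = 2).
logits = torch.tensor([[-2.0, 2.0]])

# Softmax over the class dimension.
probs = F.softmax(logits, dim=1)
print(probs)   # tensor([[0.0180, 0.9820]])

# The probabilities along dim 1 sum to one, so they cannot
# all be close to zero.
assert torch.allclose(probs.sum(dim=1), torch.ones(1))
```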

Could you capture the issue numerically and post a complete, runnable
script that reproduces it? Please also let us know what version of
pytorch you are running.
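
A skeleton along these lines would do (hypothetical; the hard-coded
tensor stands in for the actual logits your model produces when softmax
misbehaves):

```python
import torch
import torch.nn.functional as F

print(torch.__version__)

# Replace this with the actual failing logits from your model.
logits = torch.tensor([[-2.0, 2.0]])
print(logits)
print(F.softmax(logits, dim=1))
```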

Best.

K. Frank