Sigmoid on a tensor returns [1.0 ...]

I did not expect the output below, as I thought sigmoid would map the values of a tensor to lie between 0 and 1?

Perhaps it's related to the finite precision of floats, but I didn't manage to track down the relevant docs.

import torch

print(f"sig {torch.tensor([10.0, 11.0, 12.0]).sigmoid()}")
> sig tensor([1.0000, 1.0000, 1.0000])

Yep. The values are still in the valid range, i.e. between 0 and 1; they only get rounded to 1.0000 when the tensor is printed at the default precision of 4 decimal places.

import torch

val = torch.tensor([10.0, 11.0, 12.0]).sigmoid()
for i in range(val.shape[0]):
    print("{:.10}".format(val[i].item()))

Output:

0.9999545813
0.9999833107
0.9999938011

Btw, calling torch.set_printoptions(precision=5) might help to view the output more nicely.
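As a side note, for large enough inputs the stored float32 value really is exactly 1.0, not just rounded at print time. Here is a small pure-Python sketch of that effect (no torch needed); the `sigmoid` helper and the `struct`-based float32 round-trip are my own illustration of what a default float32 tensor stores, not PyTorch internals:

```python
import math
import struct

def sigmoid(x):
    # sigmoid computed in float64
    return 1.0 / (1.0 + math.exp(-x))

def to_float32(x):
    # round-trip through IEEE-754 single precision,
    # mimicking storage in a float32 tensor
    return struct.unpack("f", struct.pack("f", x))[0]

for x in (10.0, 12.0, 20.0):
    full = sigmoid(x)       # float64 value, strictly below 1
    f32 = to_float32(full)  # what float32 can represent
    print(x, full, f32, f32 == 1.0)
```

For x = 10 or 12 the float32 value is still distinguishable from 1.0, but roughly beyond x ≈ 17 the gap 1 − sigmoid(x) falls below the float32 resolution near 1.0 (about 6e-8), so the stored value is exactly 1.0 no matter how many digits you print.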
