Hi Ximeng!
sigmoid (x), which is differentiable, moves "softly" from 0 to 1 as x moves from negative to positive. You may shift where the transition occurs, sigmoid (x - shift), and sharpen the transition, sigmoid (sharpness * x).
Consider:
>>> import torch
>>> torch.__version__
'1.12.0'
>>> _ = torch.manual_seed (2022)
>>> x = torch.randn (5, 8, requires_grad = True)
>>> x
tensor([[-0.9788, -1.5154, -0.8222, 0.1214, 0.0716, -0.0872, -0.0253, -1.6267],
[ 0.2230, -1.6746, -1.4725, 0.9721, -0.2191, -0.9397, -1.7756, -0.6259],
[-1.1104, 1.1890, 1.3730, 0.4915, 0.3579, -0.1685, -0.8579, -1.0574],
[ 0.2105, 1.9045, 1.8237, 1.5122, -0.3140, -0.0810, -1.3631, -0.0701],
[-1.1876, -1.0787, 0.9551, -0.2958, 1.0663, -0.5134, -0.3846, -1.1481]],
requires_grad=True)
>>> (x > 0).sum()
tensor(14)
>>> torch.sigmoid (x)
tensor([[0.2731, 0.1801, 0.3053, 0.5303, 0.5179, 0.4782, 0.4937, 0.1643],
[0.5555, 0.1578, 0.1866, 0.7255, 0.4454, 0.2810, 0.1448, 0.3484],
[0.2478, 0.7666, 0.7979, 0.6204, 0.5885, 0.4580, 0.2978, 0.2578],
[0.5524, 0.8704, 0.8610, 0.8194, 0.4221, 0.4798, 0.2037, 0.4825],
[0.2337, 0.2538, 0.7221, 0.4266, 0.7439, 0.3744, 0.4050, 0.2408]],
grad_fn=<SigmoidBackward0>)
>>> torch.sigmoid (x).sum()
tensor(17.9145, grad_fn=<SumBackward0>)
>>> torch.sigmoid (10 * x).sum()
tensor(14.9508, grad_fn=<SumBackward0>)
>>> torch.sigmoid (100 * x).sum()
tensor(14.0741, grad_fn=<SumBackward0>)
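As the sharpness grows, the soft count approaches the hard count of 14. The transcript above only demonstrates sharpening; a minimal sketch of the shift as well (the variable names `threshold`, `sharpness`, `soft_count` are mine, not anything from torch) counts elements above a threshold differentiably and backpropagates through the count:

```python
import torch

torch.manual_seed (2022)
x = torch.randn (5, 8, requires_grad = True)

# hard count of elements above a threshold -- not differentiable
threshold = 0.5
hard_count = (x > threshold).sum()

# soft count: the shift moves the sigmoid's transition to `threshold`,
# and the sharpness makes it approximate a hard step
sharpness = 100.0
soft_count = torch.sigmoid (sharpness * (x - threshold)).sum()

soft_count.backward()   # gradients flow back into x
```

With a large sharpness the two counts agree closely, while `x.grad` remains available for use in a loss function.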
Best.
K. Frank