In the last snippet I posted, I simply sum all the values. If that sum is negative, the output values have their signs inverted with respect to the originals.
>>> z = [-1.0, -1.0, -3.0, -4.0, 1.0, 2.0, 3.0]
>>> z_exp = [float(i) for i in z]
>>> print([round(i, 2) for i in z_exp])
[-1.0, -1.0, -3.0, -4.0, 1.0, 2.0, 3.0]
>>> sum_z_exp = -sum(z_exp)
>>> print(round(sum_z_exp, 2))
3.0
>>> softmax = [round(i / sum_z_exp, 3) for i in z_exp]
>>> print(softmax)
[-0.333, -0.333, -1.0, -1.333, 0.333, 0.667, 1.0]
Here sum(softmax) = -1 and the original signs are preserved.
However, if you don't negate the sum (i.e., don't take its absolute value):
>>> z = [-1.0, -1.0, -3.0, -4.0, 1.0, 2.0, 3.0]
>>> z_exp = [float(i) for i in z]
>>> print([round(i, 2) for i in z_exp])
[-1.0, -1.0, -3.0, -4.0, 1.0, 2.0, 3.0]
>>> sum_z_exp = sum(z_exp)
>>> print(round(sum_z_exp, 2))
-3.0
>>> softmax = [round(i / sum_z_exp, 3) for i in z_exp]
>>> print(softmax)
[0.333, 0.333, 1.0, 1.333, -0.333, -0.667, -1.0]
They sum to 1, but the signs are inverted.
Besides, since you divide by the sum, the outputs blow up towards infinity as the sum approaches 0, and with a sum of exactly 0 the division is undefined: no scaling of the values can make them sum to 1.
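To make that failure concrete, here is a minimal sketch (the two-element list is just an illustrative input of mine):

```python
z = [-1.0, 1.0]  # the values cancel out: sum(z) == 0.0

try:
    # plain normalization divides by the sum, which is zero here
    weights = [i / sum(z) for i in z]
except ZeroDivisionError:
    print("cannot normalize: the values sum to 0")
```

Even when the sum is merely close to 0 rather than exactly 0, the weights blow up in magnitude.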
Lastly, it's an ill-posed problem: since you put no constraints on the result, you can satisfy it trivially, for example by setting all values to zero except one of them.
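For instance, here is such a trivial "solution" that meets the sum-to-1 requirement (which entry keeps the mass is an arbitrary choice of mine):

```python
z = [-1.0, -1.0, -3.0, -4.0, 1.0, 2.0, 3.0]

# put all the mass on a single (arbitrary) position
trivial = [0.0] * len(z)
trivial[0] = 1.0

print(sum(trivial))  # 1.0 -- sums to 1, yet tells you nothing about z
```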
Or another example: you don't like softmax because it does not preserve the sign, yet you ask for weights that sum up to 1. There are lots of ways of making values sum to 1, but you didn't say which properties you want this transformation to have.
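As one sketch of an alternative with different properties (shifting by the minimum is my own choice, not something from the question): shift the values to be nonnegative, then normalize.

```python
z = [-1.0, -1.0, -3.0, -4.0, 1.0, 2.0, 3.0]

# shift so the smallest value becomes 0, then divide by the new sum
shifted = [i - min(z) for i in z]
weights = [i / sum(shifted) for i in shifted]

print(weights)       # nonnegative, ordered like z
print(sum(weights))  # 1.0 (up to rounding)
```

Note its quirk: the minimum always gets weight exactly 0, so this is only one option among many.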
In this one, the softmax version, the weights are proportional to the exponentials of the original values:
>>> import math
>>> z = [-1.0, -2.0, -3.0, 4.0, 1.0, 2.0, 3.0]
>>> z_exp = [math.exp(i) for i in z]
>>> print([round(i, 2) for i in z_exp])
[0.37, 0.14, 0.05, 54.6, 2.72, 7.39, 20.09]
>>> sum_z_exp = sum(z_exp)
>>> print(round(sum_z_exp, 2))
85.34
>>> softmax = [round(i / sum_z_exp, 3) for i in z_exp]
>>> print(softmax)
[0.004, 0.002, 0.001, 0.64, 0.032, 0.087, 0.235]
It heavily penalizes negative values: they map to weights near zero.
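If sign preservation is the property you actually care about, one sketch (my own suggestion, not something the question specified) is to divide by the sum of absolute values. The signs survive and the absolute weights sum to 1, though the signed weights themselves do not:

```python
z = [-1.0, -1.0, -3.0, -4.0, 1.0, 2.0, 3.0]

# normalize by the sum of absolute values instead of the plain sum
denom = sum(abs(i) for i in z)       # 15.0
weights = [i / denom for i in z]

print([round(w, 3) for w in weights])
print(sum(abs(w) for w in weights))  # ~1.0: the |weights| sum to 1
```

Again, this satisfies a different constraint than "the weights sum to 1", which is exactly why you need to state the properties you want.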