Hi Daniel!

I think you are correct. I would call this *round-off error*, where, numerically, `(1.0 + delta) - 1.0` becomes exactly floating-point zero somewhere around `delta = 1.e-16` (for double precision).
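
As a minimal sketch of this (plain python, which computes in double precision):

```
delta = 1.e-15
print ((1.0 + delta) - 1.0)   # nonzero (but already inaccurate): 1.1102230246251565e-15
delta = 1.e-16
print ((1.0 + delta) - 1.0)   # rounds off to exactly 0.0
```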

To me, *underflow* is where a very small `epsilon` becomes exactly floating-point zero somewhere around `epsilon = 1.e-324` (for double precision).
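
And a minimal sketch of underflow (again plain-python doubles):

```
epsilon = 1.e-323
print (epsilon == 0.0)   # False -- still representable as a subnormal double
epsilon = 1.e-324
print (epsilon == 0.0)   # True -- underflows to exactly 0.0
```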

The problem is that for small `delta`, `exp (delta) ~ 1.0 + delta`, so you get exactly this kind of round-off error.
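
A quick illustration of this round-off (plain python `math`, double precision):

```
import math
delta = 1.e-16
print (math.exp (delta) - 1.0)   # exp (delta) rounds to exactly 1.0, so this prints 0.0
print (math.expm1 (delta))       # expm1 avoids the round-off and prints 1e-16
```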

Note that many math libraries, including pytorch, implement the `expm1()` function to address this issue.
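
A minimal sketch (and, as a side note, more recent pytorch versions do implement `expm1()` directly as `torch.expm1()`):

```
import torch
z = torch.DoubleTensor ([1.e-16])
print (torch.expm1 (z))   # avoids the round-off (requires a pytorch version newer than 0.3.0)
```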

(I don’t think this helps with `Softmax` or `LogSoftmax`, though, because in those cases you end up with results of order 1 anyway.)
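
(As a further aside, the usual way to stabilize `Softmax` is the max-subtraction trick that the script below also uses; it guards against overflow in `exp()`, which is a separate issue from the round-off discussed here. A minimal sketch:)

```
import torch
z = torch.DoubleTensor ([1000.0, 1001.0, 1002.0])
print (torch.exp (z) / torch.exp (z).sum())   # exp() overflows to inf, giving nan
z_shifted = z - torch.max (z)
print (torch.exp (z_shifted) / torch.exp (z_shifted).sum())   # stable softmax
```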

This (0.3.0) script illustrates the round-off error issue and the `expm1()` function:

```
import torch
torch.__version__
import math
def expm1 (t): # not yet implemented in 0.3.0
    res = torch.zeros_like (t)
    for i in range (t.shape[0]):
        res[i] = math.expm1 (t[i]) # double precision, then truncated, if FloatTensor
    return res
z = torch.DoubleTensor ([1.e-15, 2.e-15, 3.e-15])
z_max = torch.max (z)
torch.set_printoptions (precision = 20)
expm1 (z) # correct to about 15 decimal digits
expm1 (z - z_max) # correct to about 15 decimal digits
expm1 (z.float()) # not exactly single precision
expm1 (z.float() - z_max) # not exactly single precision
torch.exp (z) - 1.0 # double precision (without expm1)
torch.exp (z - z_max) - 1.0 # double precision (without expm1)
torch.exp (z.float()) - 1.0 # single precision (without expm1)
torch.exp (z.float() - z_max) - 1.0 # single precision (without expm1)
```

Here is the output:

```
>>> import torch
>>> torch.__version__
'0.3.0b0+591e73e'
>>>
>>> import math
>>>
>>> def expm1 (t): # not yet implemented in 0.3.0
...     res = torch.zeros_like (t)
...     for i in range (t.shape[0]):
...         res[i] = math.expm1 (t[i]) # double precision, then truncated, if FloatTensor
...     return res
...
>>> z = torch.DoubleTensor ([1.e-15, 2.e-15, 3.e-15])
>>>
>>> z_max = torch.max (z)
>>>
>>> torch.set_printoptions (precision = 20)
>>>
>>> expm1 (z) # correct to about 15 decimal digits
1.00000e-15 *
1.00000000000000066613
2.00000000000000177636
3.00000000000000444089
[torch.DoubleTensor of size 3]
>>> expm1 (z - z_max) # correct to about 15 decimal digits
1.00000e-15 *
-1.99999999999999755751
-0.99999999999999900080
0.00000000000000000000
[torch.DoubleTensor of size 3]
>>>
>>> expm1 (z.float()) # not exactly single precision
1.00000e-15 *
1.00000000362749363880
2.00000000725498727761
2.99999990500336233268
[torch.FloatTensor of size 3]
>>> expm1 (z.float() - z_max) # not exactly single precision
1.00000e-15 *
-1.99999979549675055424
-0.99999989774837527712
0.00000000000000000000
[torch.FloatTensor of size 3]
>>>
>>> torch.exp (z) - 1.0 # double precision (without expm1)
1.00000e-15 *
1.11022302462515654042
1.99840144432528155072
3.10862446895043786910
[torch.DoubleTensor of size 3]
>>> torch.exp (z - z_max) - 1.0 # double precision (without expm1)
1.00000e-15 *
-1.99840144432528155072
-0.99920072216264077536
0.00000000000000000000
[torch.DoubleTensor of size 3]
```

Best.

K. Frank