I’d like to avoid NaNs in division by evaluating (x + eps) / (y + eps) for a type-dependent number eps. Does torch provide an equivalent to C’s DBL_MIN/FLT_MIN, C++'s std::numeric_limits<T>::min, or NumPy’s numpy.finfo(...).tiny? Ideally it would be a method of each Tensor class, like Tensor.finfo.tiny.
PyTorch doesn’t have such functionality yet, but since it uses the standard floating-point formats, you can use the NumPy methods to get the minimum, or easily compute it yourself.
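For instance, a minimal sketch using numpy.finfo(...).tiny as the eps (the helper name safe_div and the default dtype are my own choices here, not part of torch):

```python
import numpy as np

def safe_div(x, y, dtype=np.float32):
    # Hypothetical helper: numpy.finfo(dtype).tiny is the smallest
    # positive normal value of the floating-point type, so the
    # denominator can never be exactly zero.
    eps = np.finfo(dtype).tiny
    return (x + eps) / (y + eps)

# 0/0 would produce a NaN; the shifted division yields 1.0 instead.
print(safe_div(np.float32(0.0), np.float32(0.0)))  # → 1.0
```

The same eps works for torch tensors of the matching dtype, since they share the IEEE 754 representation.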
I just created a PR to get the numpy.dtype for torch.Tensors: https://github.com/pytorch/pytorch/pull/4256