When exactly does 'inf' show up?

Hello.
I got different results for the same calculation in torch and in plain Python:

>>> a=torch.tensor(888)
>>> torch.exp(a/10)
tensor(inf)

>>> b=888
>>> math.exp(b/10)
3.6757840844711625e+38

I thought it was because of an overflow in PyTorch.
Why do the two results differ?
When exactly do we see that overflow in PyTorch?
Even if we allocate the tensor on CUDA, we get the same result.
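For reference, this is roughly what I tried on the GPU (the device index in the output is just illustrative):

>>> import torch
>>> a = torch.tensor(888, device="cuda")  # same scalar, but allocated on the GPU
>>> torch.exp(a / 10)
tensor(inf, device='cuda:0')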
Thanks in advance.

I think what you actually did is

>>> a = torch.Tensor([888]) # torch.Tensor(888) creates a tensor with 888 elements
>>> torch.exp(a / 10)
tensor([inf])
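You can see the difference in the shapes (the values of the first tensor are just uninitialized memory):

>>> torch.Tensor(888).shape    # size argument: an uninitialized tensor with 888 elements
torch.Size([888])
>>> torch.Tensor([888]).shape  # data argument: a single-element tensor holding 888.0
torch.Size([1])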

The difference between the two cases comes from the data type.
By default, PyTorch works with 32-bit floats, so torch.exp(a / 10) is computed in float32.

Meanwhile, Python's built-in float is a 64-bit double, so math.exp works in 64-bit precision.

exp(88.8) ≈ 3.68e+38 exceeds the upper limit of a 32-bit float (about 3.4e+38) but not that of a 64-bit float (about 1.8e+308).
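You can check those limits yourself with torch.finfo:

>>> import math, torch
>>> torch.finfo(torch.float32).max   # largest finite 32-bit float
3.4028234663852886e+38
>>> torch.finfo(torch.float64).max   # largest finite 64-bit float
1.7976931348623157e+308
>>> math.exp(88.8)                   # above the float32 limit, far below the float64 one
3.6757840844711625e+38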
If you want to handle that number in PyTorch, use a DoubleTensor (64-bit) instead.

>>> a = torch.DoubleTensor([888])
>>> torch.exp(a/10)
tensor([3.6758e+38], dtype=torch.float64)
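You can get the same result by passing the dtype explicitly (or by calling torch.set_default_dtype(torch.float64) once at the start of your script):

>>> a = torch.tensor(888, dtype=torch.float64)  # request a 64-bit float tensor directly
>>> torch.exp(a / 10)
tensor(3.6758e+38, dtype=torch.float64)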

See the docs: torch.Tensor — PyTorch 1.12 documentation

@thecho7
Aha! Thank you for your kind explanation.
What if we want to speed up the calculation by using 32-bit instead of 64-bit?
I think we may run into a speed vs. precision dilemma. Which one do you think is better?

Many recent works use 16-bit tensors, not 64-bit ones.
Using 64-bit is usually not a good option when you weigh the speed cost against the accuracy gain.

I recommend using 32-bit.
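If you do try 16-bit, just keep its much smaller range in mind; as a quick check, the same exp(88.8) overflows there as well:

>>> torch.finfo(torch.float16).max   # 16-bit floats overflow much earlier
65504.0
>>> torch.exp(torch.tensor(888, dtype=torch.float16) / 10)
tensor(inf, dtype=torch.float16)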

@thecho7
Thanks a lot!!