Numerical stability and overflow problem with torch.exp compared with numpy.exp

I have this tensor (and an equivalent NumPy array): aaa = torch.tensor([123, 456, 789]).float()

In PyTorch, the following softmax computation:

torch.exp(aaa - torch.max(aaa)) / torch.sum(torch.exp(aaa - torch.max(aaa)))

gives the result:

> tensor([0., 0., 1.])

However, if I use the exp function from NumPy, the same code gives:

> array([5.75274406e-290, 2.39848787e-145, 1.00000000e+000])
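
For reference, a minimal sketch of the comparison (assuming the NumPy array is built from a Python list of floats, which defaults to float64):

import numpy as np
import torch

aaa = torch.tensor([123, 456, 789]).float()  # torch.float32
bbb = np.array([123.0, 456.0, 789.0])        # np.float64, NumPy's default

# float32: exp(123 - 789) = exp(-666) underflows to 0
print(torch.exp(aaa - torch.max(aaa)) / torch.sum(torch.exp(aaa - torch.max(aaa))))

# float64: exp(-666) ~ 5.75e-290 is still representable
print(np.exp(bbb - np.max(bbb)) / np.sum(np.exp(bbb - np.max(bbb))))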

How can I solve this problem when using PyTorch?

torch.float is float32, whereas NumPy defaults to float64. In float32 the intermediate results underflow: exp(-666) ≈ 5.75e-290 is far below the smallest positive float32 (about 1.4e-45), but still representable as a float64. You have to explicitly use .double() or dtype=torch.float64. There is also torch.set_default_dtype as a global override.
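
A minimal sketch of the fix, either per-tensor or via the global default:

import torch

# Option 1: request float64 explicitly
aaa = torch.tensor([123, 456, 789], dtype=torch.float64)  # or .double() on an existing tensor
print(torch.exp(aaa - torch.max(aaa)) / torch.sum(torch.exp(aaa - torch.max(aaa))))
# ~ tensor([5.75e-290, 2.40e-145, 1.00e+00], dtype=torch.float64)

# Option 2: change the default dtype for all new floating-point tensors
torch.set_default_dtype(torch.float64)
bbb = torch.tensor([123.0, 456.0, 789.0])  # float literals now become float64
print(torch.softmax(bbb, dim=0))           # same values via the built-in softmax

Note that torch.set_default_dtype only affects tensors built from Python floats; torch.tensor([123, 456, 789]) still produces an int64 tensor, so either write the values as floats or keep an explicit .double() conversion.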