I found this weird inconsistency in PyTorch:
```python
import torch

a = torch.ones(1)
b = a + 0.1
b          # b here is 1.100000023841858
b.numpy()  # 1.1

c = torch.ones(1) * 1000
d = c + 0.1
d          # d is 1000.0999755859375
d.numpy()  # 1000.1
```
The gap between the stored value and what it should be grows as x increases, and this seems to make the gradient-check function get_numerical_jacobian() especially unstable when dealing with large inputs.
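
To make that growth concrete, here is a minimal sketch (only default float32 tensors are assumed, nothing else) that measures how far the tensor result of x + 0.1 drifts from the plain Python float result at a few magnitudes:

```python
import torch

# Compare the default-dtype (float32) result of x + 0.1 against
# the Python float (float64) result as x grows.
for x in [1.0, 1000.0, 1e6, 1e8]:
    t = torch.ones(1) * x + 0.1   # float32 arithmetic
    exact = x + 0.1               # float64 arithmetic
    print(f"x = {x:>12}: float32 -> {t.item():.10f}, "
          f"error = {abs(t.item() - exact):.3e}")
```

The printed error column keeps increasing with x, which is the same pattern as in the example above.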
Can anyone explain why this is the case?