torch.tensor precision not the same as numpy.array

In the torch tensor case below, why do s * pi and t * pi have different precision?

#! /usr/bin/env python3
r'''
Expected Results:
   s * pi - t * pi as numpy arrays  =  [0.]
   s * pi - t * pi as torch tensors =  tensor([0.], dtype=torch.float64)

Actual Results:
   s * pi - t * pi as numpy arrays  =  [0.]
   s * pi - t * pi as torch tensors =  tensor([-8.7423e-08], dtype=torch.float64)
'''
import torch
import numpy
#
# pi
pi = numpy.pi
#
# s, t
s  = numpy.ones(1)
t  = numpy.array( [1.0] )
print( 's * pi - t * pi as numpy arrays = ', s * pi - t * pi )
#
# s, t
s  = torch.tensor( numpy.ones(1) )
t  = torch.tensor( [1.0] )
print( 's * pi - t * pi as torch tensors = ', s * pi - t * pi )

By default PyTorch infers float32 from Python float literals, so torch.tensor( [1.0] ) is a FloatTensor, while torch.tensor( numpy.ones(1) ) preserves NumPy's default float64. In t * pi, pi is therefore rounded to float32 before the subtraction, which is why the small difference appears.
Use t = torch.tensor( [1.0], dtype=torch.float64 ) to initialize t as a DoubleTensor and the result should show 0.
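The dtype inference can be checked directly. A minimal sketch (only assuming torch and numpy are installed, as in the original script):

```python
import numpy
import torch

pi = numpy.pi

# torch.tensor(numpy.ones(1)) preserves NumPy's float64 dtype,
# while torch.tensor([1.0]) infers float32 from the Python float.
s = torch.tensor( numpy.ones(1) )
t = torch.tensor( [1.0] )
print( s.dtype, t.dtype )  # torch.float64 torch.float32

# In t * pi, pi is rounded to float32; the float64 - float32 subtraction
# is promoted back to float64 and exposes that rounding error.
print( s * pi - t * pi )

# Forcing float64 at construction removes the discrepancy.
t64 = torch.tensor( [1.0], dtype=torch.float64 )
print( s * pi - t64 * pi )  # tensor([0.], dtype=torch.float64)
```

Alternatively, torch.set_default_dtype(torch.float64) changes the inferred dtype for all subsequently created tensors, which may be simpler when a whole script should match NumPy's float64 behavior.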

Thanks for the clarification. I did not expect that a torch.tensor would act differently than numpy in this regard.