PyTorch dtype vs NumPy dtype w.r.t. casting

Hi !
I recently looked at NumPy dtypes, more precisely at how they are cast during expression evaluation.

When testing with PyTorch, I realized the same cast was not performed, and that I was getting a runtime error instead. Here is how I tested it:

>>> import torch
>>> import numpy
>>> x = torch.zeros([4], dtype=torch.int32)
>>> y = torch.zeros([4], dtype=torch.float32)
>>> x + y
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: expected type torch.FloatTensor but got torch.IntTensor
>>> x = numpy.zeros([4], dtype=numpy.int32)
>>> y = numpy.zeros([4], dtype=numpy.float32)
>>> x + y
array([0., 0., 0., 0.])
>>> (x + y).dtype
dtype('float64')
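For reference, NumPy exposes the promotion rule applied above directly through `numpy.result_type`; as I understand it, the result is widened to float64 because float32 cannot represent every int32 value exactly:

```python
import numpy

# The promotion rule behind the session above:
# int32 + float32 -> float64 (float32 can't hold all int32 values exactly).
print(numpy.result_type(numpy.int32, numpy.float32))
```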

  1. PyTorch does not perform any cast and expects both sides of the operation to have the same dtype. Is this statement right?
  2. Is there any plan to follow NumPy's design, or will it stay like this?
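In the meantime, the workaround I use is to cast explicitly with `Tensor.to` (or the `.float()` shorthand) before the operation, mirroring the conversion NumPy performs implicitly; a minimal sketch:

```python
import torch

x = torch.zeros([4], dtype=torch.int32)
y = torch.zeros([4], dtype=torch.float32)

# Cast the integer tensor to the floating dtype explicitly
# so both operands match before the addition.
z = x.to(torch.float32) + y  # equivalently: x.float() + y
print(z.dtype)  # torch.float32
```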

Best regards,